00:00:00.001 Started by upstream project "autotest-nightly" build number 4173 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3535 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.182 Fetching changes from the remote Git repository 00:00:00.183 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.247 Using shallow fetch with depth 1 00:00:00.247 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.247 > git --version # timeout=10 00:00:00.300 > git --version # 'git version 2.39.2' 00:00:00.300 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.337 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.023 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.036 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.047 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:09.047 > git config core.sparsecheckout # timeout=10 00:00:09.059 > git read-tree -mu HEAD # timeout=10 00:00:09.073 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:09.093 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:09.094 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:09.194 [Pipeline] Start of Pipeline 00:00:09.207 [Pipeline] library 00:00:09.208 Loading library shm_lib@master 00:00:09.209 Library shm_lib@master is cached. Copying from home. 00:00:09.228 [Pipeline] node 00:00:09.236 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.238 [Pipeline] { 00:00:09.247 [Pipeline] catchError 00:00:09.248 [Pipeline] { 00:00:09.260 [Pipeline] wrap 00:00:09.270 [Pipeline] { 00:00:09.277 [Pipeline] stage 00:00:09.279 [Pipeline] { (Prologue) 00:00:09.503 [Pipeline] sh 00:00:09.783 + logger -p user.info -t JENKINS-CI 00:00:09.804 [Pipeline] echo 00:00:09.805 Node: GP11 00:00:09.814 [Pipeline] sh 00:00:10.113 [Pipeline] setCustomBuildProperty 00:00:10.126 [Pipeline] echo 00:00:10.128 Cleanup processes 00:00:10.135 [Pipeline] sh 00:00:10.422 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.422 2762558 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.437 [Pipeline] sh 00:00:10.721 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.721 ++ grep -v 'sudo pgrep' 00:00:10.721 ++ awk '{print $1}' 00:00:10.721 + sudo kill -9 00:00:10.721 + true 00:00:10.735 [Pipeline] cleanWs 00:00:10.744 [WS-CLEANUP] Deleting project workspace... 00:00:10.744 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.750 [WS-CLEANUP] done 00:00:10.754 [Pipeline] setCustomBuildProperty 00:00:10.769 [Pipeline] sh 00:00:11.045 + sudo git config --global --replace-all safe.directory '*' 00:00:11.133 [Pipeline] httpRequest 00:00:11.632 [Pipeline] echo 00:00:11.634 Sorcerer 10.211.164.101 is alive 00:00:11.653 [Pipeline] retry 00:00:11.656 [Pipeline] { 00:00:11.671 [Pipeline] httpRequest 00:00:11.675 HttpMethod: GET 00:00:11.676 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.676 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.698 Response Code: HTTP/1.1 200 OK 00:00:11.699 Success: Status code 200 is in the accepted range: 200,404 00:00:11.699 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:26.724 [Pipeline] } 00:00:26.742 [Pipeline] // retry 00:00:26.751 [Pipeline] sh 00:00:27.036 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:27.052 [Pipeline] httpRequest 00:00:27.479 [Pipeline] echo 00:00:27.481 Sorcerer 10.211.164.101 is alive 00:00:27.491 [Pipeline] retry 00:00:27.493 [Pipeline] { 00:00:27.507 [Pipeline] httpRequest 00:00:27.512 HttpMethod: GET 00:00:27.513 URL: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:27.513 Sending request to url: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:27.535 Response Code: HTTP/1.1 200 OK 00:00:27.536 Success: Status code 200 is in the accepted range: 200,404 00:00:27.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:25.710 [Pipeline] } 00:01:25.728 [Pipeline] // retry 00:01:25.736 [Pipeline] sh 00:01:26.025 + tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:29.328 [Pipeline] sh 00:01:29.619 + git -C spdk log --oneline -n5 00:01:29.619 bbce7a874 event: move struct spdk_lw_thread to internal header 00:01:29.619 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:01:29.619 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:01:29.619 0ce363beb spdk_log: introduce spdk_log_ext API 00:01:29.619 412fced1b bdev/compress: unmap support. 
00:01:29.631 [Pipeline] } 00:01:29.647 [Pipeline] // stage 00:01:29.657 [Pipeline] stage 00:01:29.660 [Pipeline] { (Prepare) 00:01:29.680 [Pipeline] writeFile 00:01:29.698 [Pipeline] sh 00:01:29.986 + logger -p user.info -t JENKINS-CI 00:01:30.001 [Pipeline] sh 00:01:30.288 + logger -p user.info -t JENKINS-CI 00:01:30.303 [Pipeline] sh 00:01:30.592 + cat autorun-spdk.conf 00:01:30.593 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.593 SPDK_TEST_NVMF=1 00:01:30.593 SPDK_TEST_NVME_CLI=1 00:01:30.593 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.593 SPDK_TEST_NVMF_NICS=e810 00:01:30.593 SPDK_RUN_ASAN=1 00:01:30.593 SPDK_RUN_UBSAN=1 00:01:30.593 NET_TYPE=phy 00:01:30.601 RUN_NIGHTLY=1 00:01:30.606 [Pipeline] readFile 00:01:30.634 [Pipeline] withEnv 00:01:30.636 [Pipeline] { 00:01:30.650 [Pipeline] sh 00:01:30.959 + set -ex 00:01:30.959 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:30.959 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:30.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.959 ++ SPDK_TEST_NVMF=1 00:01:30.959 ++ SPDK_TEST_NVME_CLI=1 00:01:30.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.959 ++ SPDK_TEST_NVMF_NICS=e810 00:01:30.959 ++ SPDK_RUN_ASAN=1 00:01:30.959 ++ SPDK_RUN_UBSAN=1 00:01:30.959 ++ NET_TYPE=phy 00:01:30.959 ++ RUN_NIGHTLY=1 00:01:30.959 + case $SPDK_TEST_NVMF_NICS in 00:01:30.959 + DRIVERS=ice 00:01:30.960 + [[ tcp == \r\d\m\a ]] 00:01:30.960 + [[ -n ice ]] 00:01:30.960 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:30.960 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:30.960 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:30.960 rmmod: ERROR: Module irdma is not currently loaded 00:01:30.960 rmmod: ERROR: Module i40iw is not currently loaded 00:01:30.960 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:30.960 + true 00:01:30.960 + for D in $DRIVERS 00:01:30.960 + sudo modprobe ice 00:01:30.960 + exit 0 00:01:30.968 [Pipeline] } 00:01:30.985 [Pipeline] // withEnv 00:01:30.991 [Pipeline] } 00:01:31.005 [Pipeline] // stage 00:01:31.017 [Pipeline] catchError 00:01:31.019 [Pipeline] { 00:01:31.036 [Pipeline] timeout 00:01:31.036 Timeout set to expire in 1 hr 0 min 00:01:31.038 [Pipeline] { 00:01:31.054 [Pipeline] stage 00:01:31.056 [Pipeline] { (Tests) 00:01:31.081 [Pipeline] sh 00:01:31.367 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.367 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.367 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.367 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:31.367 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.367 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:31.367 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:31.367 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:31.367 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:31.367 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:31.367 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:31.367 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:31.367 + source /etc/os-release 00:01:31.367 ++ NAME='Fedora Linux' 00:01:31.367 ++ VERSION='39 (Cloud Edition)' 00:01:31.367 ++ ID=fedora 00:01:31.367 ++ VERSION_ID=39 00:01:31.367 ++ VERSION_CODENAME= 00:01:31.367 ++ PLATFORM_ID=platform:f39 00:01:31.367 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:31.367 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.367 ++ LOGO=fedora-logo-icon 00:01:31.367 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:31.367 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.367 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:31.367 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.367 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.367 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.367 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:31.367 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.367 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:31.367 ++ SUPPORT_END=2024-11-12 00:01:31.367 ++ VARIANT='Cloud Edition' 00:01:31.367 ++ VARIANT_ID=cloud 00:01:31.367 + uname -a 00:01:31.367 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:31.367 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:32.304 Hugepages 00:01:32.304 node hugesize free / total 00:01:32.304 node0 1048576kB 0 / 0 00:01:32.304 node0 2048kB 0 / 0 00:01:32.304 node1 1048576kB 0 / 0 00:01:32.304 node1 2048kB 0 / 0 00:01:32.304 00:01:32.304 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:32.304 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:32.304 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:32.304 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:32.563 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:32.563 + rm -f /tmp/spdk-ld-path 00:01:32.563 + source autorun-spdk.conf 00:01:32.563 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.563 ++ SPDK_TEST_NVMF=1 00:01:32.563 ++ SPDK_TEST_NVME_CLI=1 00:01:32.563 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.563 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.563 ++ SPDK_RUN_ASAN=1 00:01:32.563 ++ SPDK_RUN_UBSAN=1 00:01:32.563 ++ NET_TYPE=phy 00:01:32.563 ++ RUN_NIGHTLY=1 00:01:32.563 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.563 + [[ -n '' ]] 00:01:32.563 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.563 + for M in /var/spdk/build-*-manifest.txt 00:01:32.563 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:32.563 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.563 + for M in /var/spdk/build-*-manifest.txt 00:01:32.563 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.563 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.563 + for M in /var/spdk/build-*-manifest.txt 00:01:32.563 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.563 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.563 ++ uname 00:01:32.563 + [[ Linux == \L\i\n\u\x ]] 00:01:32.563 + sudo dmesg -T 00:01:32.563 + sudo dmesg --clear 00:01:32.563 + dmesg_pid=2763860 00:01:32.563 + [[ Fedora Linux == FreeBSD ]] 00:01:32.563 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.563 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.563 + sudo dmesg -Tw 00:01:32.563 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.563 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.563 + export FIO_BIN=/usr/src/fio-static/fio 00:01:32.563 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.563 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.563 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:32.563 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.563 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.563 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.563 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.563 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.563 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.563 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.563 Test configuration: 00:01:32.563 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.563 SPDK_TEST_NVMF=1 00:01:32.563 SPDK_TEST_NVME_CLI=1 00:01:32.563 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.563 SPDK_TEST_NVMF_NICS=e810 00:01:32.563 SPDK_RUN_ASAN=1 00:01:32.563 SPDK_RUN_UBSAN=1 00:01:32.563 NET_TYPE=phy 00:01:32.563 RUN_NIGHTLY=1 19:31:22 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:32.563 19:31:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:32.563 19:31:22 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:32.563 19:31:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:32.563 19:31:22 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.563 19:31:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.563 19:31:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.563 19:31:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.563 
19:31:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.563 19:31:22 -- paths/export.sh@5 -- $ export PATH 00:01:32.563 19:31:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.563 19:31:22 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:32.563 19:31:22 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:32.563 19:31:22 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728840682.XXXXXX 00:01:32.563 19:31:22 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728840682.uC0BAE 00:01:32.563 19:31:22 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:32.563 19:31:22 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:32.563 19:31:22 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:32.563 19:31:22 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:32.563 19:31:22 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.563 19:31:22 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:32.563 19:31:22 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:32.563 19:31:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.563 19:31:22 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:32.563 19:31:22 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:32.563 19:31:22 -- pm/common@17 -- $ local monitor 00:01:32.563 19:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.563 19:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.563 19:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.563 19:31:22 -- pm/common@21 -- $ date +%s 00:01:32.564 19:31:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.564 19:31:22 -- pm/common@21 -- $ date +%s 00:01:32.564 19:31:22 -- pm/common@25 -- $ sleep 1 00:01:32.564 19:31:22 -- pm/common@21 -- $ date +%s 00:01:32.564 19:31:22 -- pm/common@21 -- $ date +%s 00:01:32.564 19:31:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728840682 
00:01:32.564 19:31:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728840682 00:01:32.564 19:31:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728840682 00:01:32.564 19:31:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728840682 00:01:32.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728840682_collect-cpu-load.pm.log 00:01:32.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728840682_collect-vmstat.pm.log 00:01:32.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728840682_collect-cpu-temp.pm.log 00:01:32.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728840682_collect-bmc-pm.bmc.pm.log 00:01:33.497 19:31:23 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:33.498 19:31:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.498 19:31:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.498 19:31:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.498 19:31:23 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.498 Sun Oct 13 05:31:23 PM UTC 2024 00:01:33.498 19:31:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.498 v25.01-pre-55-gbbce7a874 00:01:33.498 19:31:23 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:33.498 19:31:23 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:33.498 19:31:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:33.498 19:31:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:33.498 19:31:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.756 ************************************ 00:01:33.756 START TEST asan 00:01:33.756 ************************************ 00:01:33.756 19:31:23 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:33.756 using asan 00:01:33.756 00:01:33.756 real 0m0.000s 00:01:33.756 user 0m0.000s 00:01:33.756 sys 0m0.000s 00:01:33.756 19:31:23 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:33.756 19:31:23 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.756 ************************************ 00:01:33.756 END TEST asan 00:01:33.756 ************************************ 00:01:33.756 19:31:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.756 19:31:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.756 19:31:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:33.756 19:31:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:33.756 19:31:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.756 ************************************ 00:01:33.756 START TEST ubsan 00:01:33.756 ************************************ 00:01:33.756 19:31:23 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:33.756 using ubsan 00:01:33.756 00:01:33.756 real 0m0.000s 00:01:33.756 user 0m0.000s 00:01:33.756 sys 0m0.000s 00:01:33.756 19:31:23 
ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:33.756 19:31:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.756 ************************************ 00:01:33.756 END TEST ubsan 00:01:33.756 ************************************ 00:01:33.756 19:31:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.756 19:31:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.756 19:31:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.756 19:31:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.756 19:31:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.756 19:31:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.756 19:31:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.756 19:31:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.756 19:31:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:33.756 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:33.756 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:34.014 Using 'verbs' RDMA provider 00:01:44.552 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:54.529 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:54.529 Creating mk/config.mk...done. 00:01:54.529 Creating mk/cc.flags.mk...done. 00:01:54.529 Type 'make' to build. 00:01:54.529 19:31:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:54.529 19:31:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:54.529 19:31:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:54.529 19:31:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.529 ************************************ 00:01:54.529 START TEST make 00:01:54.529 ************************************ 00:01:54.529 19:31:43 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:54.529 make[1]: Nothing to be done for 'all'. 
00:02:04.543 The Meson build system 00:02:04.543 Version: 1.5.0 00:02:04.543 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:04.543 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:04.543 Build type: native build 00:02:04.543 Program cat found: YES (/usr/bin/cat) 00:02:04.543 Project name: DPDK 00:02:04.543 Project version: 24.03.0 00:02:04.543 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.543 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.543 Host machine cpu family: x86_64 00:02:04.543 Host machine cpu: x86_64 00:02:04.543 Message: ## Building in Developer Mode ## 00:02:04.543 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.543 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.543 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.543 Program python3 found: YES (/usr/bin/python3) 00:02:04.543 Program cat found: YES (/usr/bin/cat) 00:02:04.543 Compiler for C supports arguments -march=native: YES 00:02:04.543 Checking for size of "void *" : 8 00:02:04.543 Checking for size of "void *" : 8 (cached) 00:02:04.543 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:04.543 Library m found: YES 00:02:04.543 Library numa found: YES 00:02:04.543 Has header "numaif.h" : YES 00:02:04.543 Library fdt found: NO 00:02:04.543 Library execinfo found: NO 00:02:04.543 Has header "execinfo.h" : YES 00:02:04.543 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.543 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.543 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.543 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.543 Run-time dependency openssl found: YES 3.1.1 00:02:04.543 Run-time dependency libpcap found: YES 1.10.4 00:02:04.543 Has header "pcap.h" with dependency libpcap: YES 00:02:04.543 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.543 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.543 Compiler for C supports arguments -Wformat: YES 00:02:04.543 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.543 Compiler for C supports arguments -Wformat-security: NO 00:02:04.543 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.543 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.543 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.543 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.543 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.543 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.543 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.543 Compiler for C supports arguments -Wundef: YES 00:02:04.543 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.543 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.543 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:04.543 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.543 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.543 Program objdump found: YES (/usr/bin/objdump) 00:02:04.543 Compiler for C supports arguments -mavx512f: YES 00:02:04.543 Checking if "AVX512 checking" compiles: YES 
00:02:04.543 Fetching value of define "__SSE4_2__" : 1 00:02:04.543 Fetching value of define "__AES__" : 1 00:02:04.543 Fetching value of define "__AVX__" : 1 00:02:04.543 Fetching value of define "__AVX2__" : (undefined) 00:02:04.543 Fetching value of define "__AVX512BW__" : (undefined) 00:02:04.543 Fetching value of define "__AVX512CD__" : (undefined) 00:02:04.543 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:04.543 Fetching value of define "__AVX512F__" : (undefined) 00:02:04.543 Fetching value of define "__AVX512VL__" : (undefined) 00:02:04.543 Fetching value of define "__PCLMUL__" : 1 00:02:04.543 Fetching value of define "__RDRND__" : 1 00:02:04.543 Fetching value of define "__RDSEED__" : (undefined) 00:02:04.543 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.543 Fetching value of define "__znver1__" : (undefined) 00:02:04.543 Fetching value of define "__znver2__" : (undefined) 00:02:04.544 Fetching value of define "__znver3__" : (undefined) 00:02:04.544 Fetching value of define "__znver4__" : (undefined) 00:02:04.544 Library asan found: YES 00:02:04.544 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.544 Message: lib/log: Defining dependency "log" 00:02:04.544 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.544 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.544 Library rt found: YES 00:02:04.544 Checking for function "getentropy" : NO 00:02:04.544 Message: lib/eal: Defining dependency "eal" 00:02:04.544 Message: lib/ring: Defining dependency "ring" 00:02:04.544 Message: lib/rcu: Defining dependency "rcu" 00:02:04.544 Message: lib/mempool: Defining dependency "mempool" 00:02:04.544 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.544 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.544 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.544 Compiler for C supports arguments -mpclmul: YES 00:02:04.544 Compiler for C supports arguments -maes: YES 00:02:04.544 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.544 Compiler for C supports arguments -mavx512bw: YES 00:02:04.544 Compiler for C supports arguments -mavx512dq: YES 00:02:04.544 Compiler for C supports arguments -mavx512vl: YES 00:02:04.544 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.544 Compiler for C supports arguments -mavx2: YES 00:02:04.544 Compiler for C supports arguments -mavx: YES 00:02:04.544 Message: lib/net: Defining dependency "net" 00:02:04.544 Message: lib/meter: Defining dependency "meter" 00:02:04.544 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.544 Message: lib/pci: Defining dependency "pci" 00:02:04.544 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.544 Message: lib/hash: Defining dependency "hash" 00:02:04.544 Message: lib/timer: Defining dependency "timer" 00:02:04.544 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.544 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.544 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.544 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.544 Message: lib/power: Defining dependency "power" 00:02:04.544 Message: lib/reorder: Defining dependency "reorder" 00:02:04.544 Message: lib/security: Defining dependency "security" 00:02:04.544 Has header "linux/userfaultfd.h" : YES 00:02:04.544 Has header "linux/vduse.h" : YES 00:02:04.544 Message: lib/vhost: Defining dependency "vhost" 00:02:04.544 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:04.544 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.544 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.544 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.544 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.544 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.544 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.544 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.544 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.544 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.544 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.544 Configuring doxy-api-html.conf using configuration 00:02:04.544 Configuring doxy-api-man.conf using configuration 00:02:04.544 Program mandb found: YES (/usr/bin/mandb) 00:02:04.544 Program sphinx-build found: NO 00:02:04.544 Configuring rte_build_config.h using configuration 00:02:04.544 Message: 00:02:04.544 ================= 00:02:04.544 Applications Enabled 00:02:04.544 ================= 00:02:04.544 00:02:04.544 apps: 00:02:04.544 00:02:04.544 00:02:04.544 Message: 00:02:04.544 ================= 00:02:04.544 Libraries Enabled 00:02:04.544 ================= 00:02:04.544 00:02:04.544 libs: 00:02:04.544 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.544 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.544 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.544 00:02:04.544 Message: 00:02:04.544 =============== 00:02:04.544 Drivers Enabled 00:02:04.544 =============== 00:02:04.544 00:02:04.544 common: 00:02:04.544 00:02:04.544 bus: 00:02:04.544 pci, vdev, 00:02:04.544 mempool: 00:02:04.544 ring, 00:02:04.544 dma: 00:02:04.544 00:02:04.544 net: 00:02:04.544 00:02:04.544 crypto: 00:02:04.544 00:02:04.544 compress: 00:02:04.544 00:02:04.544 vdpa: 00:02:04.544 00:02:04.544 00:02:04.544 Message: 00:02:04.544 ================= 00:02:04.544 Content Skipped 00:02:04.544 ================= 00:02:04.544 00:02:04.544 apps: 00:02:04.544 dumpcap: explicitly disabled via build config 00:02:04.544 graph: explicitly disabled via build config 00:02:04.544 pdump: explicitly disabled via build config 00:02:04.544 proc-info: explicitly disabled via build config 00:02:04.544 test-acl: explicitly disabled via build config 00:02:04.544 test-bbdev: explicitly disabled via build config 00:02:04.544 test-cmdline: explicitly disabled via build config 00:02:04.544 test-compress-perf: explicitly disabled via build config 00:02:04.544 test-crypto-perf: explicitly disabled via build config 00:02:04.544 test-dma-perf: explicitly disabled via build config 00:02:04.544 test-eventdev: explicitly disabled via build config 00:02:04.544 test-fib: explicitly disabled via build config 00:02:04.544 test-flow-perf: explicitly disabled via build config 00:02:04.544 test-gpudev: explicitly disabled via build config 00:02:04.544 test-mldev: explicitly disabled via build config 00:02:04.544 test-pipeline: explicitly disabled via build config 00:02:04.544 test-pmd: explicitly disabled via build config 00:02:04.544 test-regex: explicitly disabled via build config 00:02:04.544 test-sad: explicitly disabled via build config 00:02:04.544 test-security-perf: explicitly disabled via build config 00:02:04.544 00:02:04.544 libs: 00:02:04.544 argparse: explicitly 
disabled via build config 00:02:04.544 metrics: explicitly disabled via build config 00:02:04.544 acl: explicitly disabled via build config 00:02:04.544 bbdev: explicitly disabled via build config 00:02:04.544 bitratestats: explicitly disabled via build config 00:02:04.544 bpf: explicitly disabled via build config 00:02:04.544 cfgfile: explicitly disabled via build config 00:02:04.544 distributor: explicitly disabled via build config 00:02:04.544 efd: explicitly disabled via build config 00:02:04.544 eventdev: explicitly disabled via build config 00:02:04.544 dispatcher: explicitly disabled via build config 00:02:04.544 gpudev: explicitly disabled via build config 00:02:04.544 gro: explicitly disabled via build config 00:02:04.544 gso: explicitly disabled via build config 00:02:04.544 ip_frag: explicitly disabled via build config 00:02:04.544 jobstats: explicitly disabled via build config 00:02:04.544 latencystats: explicitly disabled via build config 00:02:04.544 lpm: explicitly disabled via build config 00:02:04.544 member: explicitly disabled via build config 00:02:04.544 pcapng: explicitly disabled via build config 00:02:04.544 rawdev: explicitly disabled via build config 00:02:04.544 regexdev: explicitly disabled via build config 00:02:04.544 mldev: explicitly disabled via build config 00:02:04.544 rib: explicitly disabled via build config 00:02:04.544 sched: explicitly disabled via build config 00:02:04.544 stack: explicitly disabled via build config 00:02:04.544 ipsec: explicitly disabled via build config 00:02:04.544 pdcp: explicitly disabled via build config 00:02:04.544 fib: explicitly disabled via build config 00:02:04.544 port: explicitly disabled via build config 00:02:04.544 pdump: explicitly disabled via build config 00:02:04.544 table: explicitly disabled via build config 00:02:04.544 pipeline: explicitly disabled via build config 00:02:04.544 graph: explicitly disabled via build config 00:02:04.544 node: explicitly disabled via build config 00:02:04.544 00:02:04.544 drivers: 00:02:04.544 common/cpt: not in enabled drivers build config 00:02:04.544 common/dpaax: not in enabled drivers build config 00:02:04.544 common/iavf: not in enabled drivers build config 00:02:04.544 common/idpf: not in enabled drivers build config 00:02:04.544 common/ionic: not in enabled drivers build config 00:02:04.544 common/mvep: not in enabled drivers build config 00:02:04.544 common/octeontx: not in enabled drivers build config 00:02:04.544 bus/auxiliary: not in enabled drivers build config 00:02:04.544 bus/cdx: not in enabled drivers build config 00:02:04.544 bus/dpaa: not in enabled drivers build config 00:02:04.544 bus/fslmc: not in enabled drivers build config 00:02:04.544 bus/ifpga: not in enabled drivers build config 00:02:04.544 bus/platform: not in enabled drivers build config 00:02:04.544 bus/uacce: not in enabled drivers build config 00:02:04.544 bus/vmbus: not in enabled drivers build config 00:02:04.544 common/cnxk: not in enabled drivers build config 00:02:04.544 common/mlx5: not in enabled drivers build config 00:02:04.544 common/nfp: not in enabled drivers build config 00:02:04.544 common/nitrox: not in enabled drivers build config 00:02:04.544 common/qat: not in enabled drivers build config 00:02:04.544 common/sfc_efx: not in enabled drivers build config 00:02:04.544 mempool/bucket: not in enabled drivers build config 00:02:04.544 mempool/cnxk: not in enabled drivers build config 00:02:04.544 mempool/dpaa: not in enabled drivers build config 00:02:04.544 mempool/dpaa2: not in 
enabled drivers build config 00:02:04.544 mempool/octeontx: not in enabled drivers build config 00:02:04.544 mempool/stack: not in enabled drivers build config 00:02:04.544 dma/cnxk: not in enabled drivers build config 00:02:04.544 dma/dpaa: not in enabled drivers build config 00:02:04.544 dma/dpaa2: not in enabled drivers build config 00:02:04.544 dma/hisilicon: not in enabled drivers build config 00:02:04.544 dma/idxd: not in enabled drivers build config 00:02:04.544 dma/ioat: not in enabled drivers build config 00:02:04.544 dma/skeleton: not in enabled drivers build config 00:02:04.544 net/af_packet: not in enabled drivers build config 00:02:04.544 net/af_xdp: not in enabled drivers build config 00:02:04.544 net/ark: not in enabled drivers build config 00:02:04.544 net/atlantic: not in enabled drivers build config 00:02:04.544 net/avp: not in enabled drivers build config 00:02:04.544 net/axgbe: not in enabled drivers build config 00:02:04.544 net/bnx2x: not in enabled drivers build config 00:02:04.544 net/bnxt: not in enabled drivers build config 00:02:04.544 net/bonding: not in enabled drivers build config 00:02:04.544 net/cnxk: not in enabled drivers build config 00:02:04.544 net/cpfl: not in enabled drivers build config 00:02:04.544 net/cxgbe: not in enabled drivers build config 00:02:04.544 net/dpaa: not in enabled drivers build config 00:02:04.544 net/dpaa2: not in enabled drivers build config 00:02:04.544 net/e1000: not in enabled drivers build config 00:02:04.544 net/ena: not in enabled drivers build config 00:02:04.544 net/enetc: not in enabled drivers build config 00:02:04.544 net/enetfec: not in enabled drivers build config 00:02:04.545 net/enic: not in enabled drivers build config 00:02:04.545 net/failsafe: not in enabled drivers build config 00:02:04.545 net/fm10k: not in enabled drivers build config 00:02:04.545 net/gve: not in enabled drivers build config 00:02:04.545 net/hinic: not in enabled drivers build config 00:02:04.545 net/hns3: not in enabled drivers build config 00:02:04.545 net/i40e: not in enabled drivers build config 00:02:04.545 net/iavf: not in enabled drivers build config 00:02:04.545 net/ice: not in enabled drivers build config 00:02:04.545 net/idpf: not in enabled drivers build config 00:02:04.545 net/igc: not in enabled drivers build config 00:02:04.545 net/ionic: not in enabled drivers build config 00:02:04.545 net/ipn3ke: not in enabled drivers build config 00:02:04.545 net/ixgbe: not in enabled drivers build config 00:02:04.545 net/mana: not in enabled drivers build config 00:02:04.545 net/memif: not in enabled drivers build config 00:02:04.545 net/mlx4: not in enabled drivers build config 00:02:04.545 net/mlx5: not in enabled drivers build config 00:02:04.545 net/mvneta: not in enabled drivers build config 00:02:04.545 net/mvpp2: not in enabled drivers build config 00:02:04.545 net/netvsc: not in enabled drivers build config 00:02:04.545 net/nfb: not in enabled drivers build config 00:02:04.545 net/nfp: not in enabled drivers build config 00:02:04.545 net/ngbe: not in enabled drivers build config 00:02:04.545 net/null: not in enabled drivers build config 00:02:04.545 net/octeontx: not in enabled drivers build config 00:02:04.545 net/octeon_ep: not in enabled drivers build config 00:02:04.545 net/pcap: not in enabled drivers build config 00:02:04.545 net/pfe: not in enabled drivers build config 00:02:04.545 net/qede: not in enabled drivers build config 00:02:04.545 net/ring: not in enabled drivers build config 00:02:04.545 net/sfc: not in enabled 
drivers build config 00:02:04.545 net/softnic: not in enabled drivers build config 00:02:04.545 net/tap: not in enabled drivers build config 00:02:04.545 net/thunderx: not in enabled drivers build config 00:02:04.545 net/txgbe: not in enabled drivers build config 00:02:04.545 net/vdev_netvsc: not in enabled drivers build config 00:02:04.545 net/vhost: not in enabled drivers build config 00:02:04.545 net/virtio: not in enabled drivers build config 00:02:04.545 net/vmxnet3: not in enabled drivers build config 00:02:04.545 raw/*: missing internal dependency, "rawdev" 00:02:04.545 crypto/armv8: not in enabled drivers build config 00:02:04.545 crypto/bcmfs: not in enabled drivers build config 00:02:04.545 crypto/caam_jr: not in enabled drivers build config 00:02:04.545 crypto/ccp: not in enabled drivers build config 00:02:04.545 crypto/cnxk: not in enabled drivers build config 00:02:04.545 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.545 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.545 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.545 crypto/mlx5: not in enabled drivers build config 00:02:04.545 crypto/mvsam: not in enabled drivers build config 00:02:04.545 crypto/nitrox: not in enabled drivers build config 00:02:04.545 crypto/null: not in enabled drivers build config 00:02:04.545 crypto/octeontx: not in enabled drivers build config 00:02:04.545 crypto/openssl: not in enabled drivers build config 00:02:04.545 crypto/scheduler: not in enabled drivers build config 00:02:04.545 crypto/uadk: not in enabled drivers build config 00:02:04.545 crypto/virtio: not in enabled drivers build config 00:02:04.545 compress/isal: not in enabled drivers build config 00:02:04.545 compress/mlx5: not in enabled drivers build config 00:02:04.545 compress/nitrox: not in enabled drivers build config 00:02:04.545 compress/octeontx: not in enabled drivers build config 00:02:04.545 compress/zlib: not in enabled drivers build config 00:02:04.545 regex/*: missing internal dependency, "regexdev" 00:02:04.545 ml/*: missing internal dependency, "mldev" 00:02:04.545 vdpa/ifc: not in enabled drivers build config 00:02:04.545 vdpa/mlx5: not in enabled drivers build config 00:02:04.545 vdpa/nfp: not in enabled drivers build config 00:02:04.545 vdpa/sfc: not in enabled drivers build config 00:02:04.545 event/*: missing internal dependency, "eventdev" 00:02:04.545 baseband/*: missing internal dependency, "bbdev" 00:02:04.545 gpu/*: missing internal dependency, "gpudev" 00:02:04.545 00:02:04.545 00:02:04.545 Build targets in project: 85 00:02:04.545 00:02:04.545 DPDK 24.03.0 00:02:04.545 00:02:04.545 User defined options 00:02:04.545 buildtype : debug 00:02:04.545 default_library : shared 00:02:04.545 libdir : lib 00:02:04.545 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:04.545 b_sanitize : address 00:02:04.545 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.545 c_link_args : 00:02:04.545 cpu_instruction_set: native 00:02:04.545 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:04.545 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:04.545 enable_docs : false 00:02:04.545 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:04.545 enable_kmods : false 00:02:04.545 max_lcores : 128 00:02:04.545 tests : false 00:02:04.545 00:02:04.545 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.545 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.545 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.545 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.545 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.545 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.545 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.545 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.545 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.545 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.545 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.545 [10/268] Linking static target lib/librte_kvargs.a 00:02:04.545 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.545 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.545 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.545 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.545 [15/268] Linking static target lib/librte_log.a 00:02:04.545 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.122 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.122 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.122 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.122 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.384 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.384 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.384 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.384 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.384 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.384 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.384 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.384 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.384 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.384 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.384 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.384 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.384 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 
00:02:05.384 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.384 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.384 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.384 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.384 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.384 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.384 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.384 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.384 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.384 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.384 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.384 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.384 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.384 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.384 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.384 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.384 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.384 [51/268] Linking static target lib/librte_telemetry.a 00:02:05.384 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.384 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.642 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.642 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.642 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.642 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.642 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.642 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.642 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.642 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.642 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.642 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.903 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.903 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.903 [66/268] Linking target lib/librte_log.so.24.1 00:02:05.903 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.168 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.168 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.168 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.168 [71/268] Linking static target lib/librte_pci.a 00:02:06.168 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.168 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.168 [74/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.168 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.168 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.168 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.430 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.430 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.430 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.430 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.430 [82/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.430 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.430 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.430 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.430 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.430 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.430 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.430 [89/268] Linking static target lib/librte_ring.a 00:02:06.430 [90/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.430 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.430 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.430 [93/268] Linking static target lib/librte_meter.a 00:02:06.430 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.430 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.430 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.430 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.430 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.430 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.430 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.430 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.691 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:06.691 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.691 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.691 [105/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.691 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.691 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.691 [108/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.691 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.692 [110/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.692 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.692 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.692 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:06.692 [114/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.692 
[115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.692 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.692 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.692 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.692 [119/268] Linking static target lib/librte_mempool.a 00:02:06.954 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.954 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.954 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:06.954 [123/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.954 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.954 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.954 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.954 [127/268] Linking static target lib/librte_rcu.a 00:02:06.954 [128/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:06.954 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.954 [130/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.954 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:07.215 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.215 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.215 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.215 [135/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.215 [136/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.215 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.215 [138/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.215 [139/268] Linking static target lib/librte_eal.a 00:02:07.476 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.476 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.476 [142/268] Linking static target lib/librte_cmdline.a 00:02:07.476 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.476 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.476 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.476 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.476 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.476 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.739 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.739 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.739 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.739 [152/268] Linking static target lib/librte_timer.a 00:02:07.739 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.739 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.739 [155/268] Generating 
lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.739 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.739 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.739 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.998 [159/268] Linking static target lib/librte_dmadev.a 00:02:07.998 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.998 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.998 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.998 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.998 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:07.998 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:07.998 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.255 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:08.255 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:08.255 [169/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.255 [170/268] Linking static target lib/librte_net.a 00:02:08.255 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.255 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:08.255 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:08.255 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:08.255 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.255 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.516 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:08.516 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.516 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.516 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:08.516 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.516 [182/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.516 [183/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.516 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.516 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.516 [186/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.516 [187/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:08.516 [188/268] Linking static target lib/librte_power.a 00:02:08.516 [189/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.516 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.516 [191/268] Linking static target drivers/librte_bus_vdev.a 00:02:08.773 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.773 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.773 [194/268] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.773 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.773 [196/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.773 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.773 [198/268] Linking static target drivers/librte_bus_pci.a 00:02:08.774 [199/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.774 [200/268] Linking static target lib/librte_hash.a 00:02:08.774 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.774 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.774 [203/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.774 [204/268] Linking static target lib/librte_compressdev.a 00:02:08.774 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.774 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.774 [207/268] Linking static target drivers/librte_mempool_ring.a 00:02:09.032 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.032 [209/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.032 [210/268] Linking static target lib/librte_reorder.a 00:02:09.032 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.290 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.290 [213/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.290 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.290 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.914 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.914 [217/268] Linking static target lib/librte_security.a 00:02:09.914 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.914 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.848 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.848 [221/268] Linking static target lib/librte_mbuf.a 00:02:11.106 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.106 [223/268] Linking static target lib/librte_cryptodev.a 00:02:11.365 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.930 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.930 [226/268] Linking static target lib/librte_ethdev.a 00:02:12.189 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.563 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.563 [229/268] Linking target lib/librte_eal.so.24.1 00:02:13.563 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.563 [231/268] Linking target lib/librte_pci.so.24.1 00:02:13.563 [232/268] Linking target lib/librte_meter.so.24.1 00:02:13.563 [233/268] Linking target 
lib/librte_ring.so.24.1 00:02:13.563 [234/268] Linking target lib/librte_timer.so.24.1 00:02:13.563 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.563 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.822 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.822 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.822 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.822 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.822 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.822 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:13.822 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.822 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:13.822 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.822 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:14.080 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.081 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:14.081 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:14.081 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:14.081 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:14.081 [252/268] Linking target lib/librte_net.so.24.1 00:02:14.081 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:14.338 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.338 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.338 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.338 [257/268] Linking target lib/librte_hash.so.24.1 00:02:14.338 [258/268] Linking target lib/librte_security.so.24.1 00:02:14.596 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.162 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.536 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.536 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:16.536 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:16.536 [264/268] Linking target lib/librte_power.so.24.1 00:02:43.113 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.113 [266/268] Linking static target lib/librte_vhost.a 00:02:43.113 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.113 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:43.113 INFO: autodetecting backend as ninja 00:02:43.113 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:43.113 CC lib/ut/ut.o 00:02:43.113 CC lib/log/log.o 00:02:43.113 CC lib/log/log_flags.o 00:02:43.113 CC lib/log/log_deprecated.o 00:02:43.113 CC lib/ut_mock/mock.o 00:02:43.113 LIB libspdk_ut.a 00:02:43.113 LIB libspdk_ut_mock.a 00:02:43.113 LIB libspdk_log.a 00:02:43.113 SO libspdk_ut.so.2.0 00:02:43.113 SO libspdk_ut_mock.so.6.0 00:02:43.113 SO libspdk_log.so.7.1 00:02:43.113 SYMLINK libspdk_ut.so 00:02:43.113 SYMLINK libspdk_ut_mock.so 00:02:43.113 SYMLINK 
libspdk_log.so 00:02:43.113 CC lib/ioat/ioat.o 00:02:43.113 CXX lib/trace_parser/trace.o 00:02:43.113 CC lib/dma/dma.o 00:02:43.113 CC lib/util/base64.o 00:02:43.113 CC lib/util/bit_array.o 00:02:43.113 CC lib/util/cpuset.o 00:02:43.113 CC lib/util/crc16.o 00:02:43.113 CC lib/util/crc32.o 00:02:43.113 CC lib/util/crc32c.o 00:02:43.113 CC lib/util/crc32_ieee.o 00:02:43.113 CC lib/util/crc64.o 00:02:43.113 CC lib/util/dif.o 00:02:43.113 CC lib/util/fd.o 00:02:43.113 CC lib/util/fd_group.o 00:02:43.113 CC lib/util/file.o 00:02:43.113 CC lib/util/hexlify.o 00:02:43.113 CC lib/util/iov.o 00:02:43.113 CC lib/util/math.o 00:02:43.113 CC lib/util/net.o 00:02:43.113 CC lib/util/pipe.o 00:02:43.113 CC lib/util/strerror_tls.o 00:02:43.113 CC lib/util/uuid.o 00:02:43.113 CC lib/util/string.o 00:02:43.113 CC lib/util/xor.o 00:02:43.113 CC lib/util/zipf.o 00:02:43.113 CC lib/util/md5.o 00:02:43.113 CC lib/vfio_user/host/vfio_user.o 00:02:43.113 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.113 LIB libspdk_dma.a 00:02:43.113 SO libspdk_dma.so.5.0 00:02:43.113 SYMLINK libspdk_dma.so 00:02:43.113 LIB libspdk_ioat.a 00:02:43.113 SO libspdk_ioat.so.7.0 00:02:43.113 SYMLINK libspdk_ioat.so 00:02:43.113 LIB libspdk_vfio_user.a 00:02:43.113 SO libspdk_vfio_user.so.5.0 00:02:43.113 SYMLINK libspdk_vfio_user.so 00:02:43.113 LIB libspdk_util.a 00:02:43.113 SO libspdk_util.so.10.0 00:02:43.113 SYMLINK libspdk_util.so 00:02:43.113 CC lib/conf/conf.o 00:02:43.113 CC lib/rdma_utils/rdma_utils.o 00:02:43.113 CC lib/json/json_parse.o 00:02:43.113 CC lib/vmd/vmd.o 00:02:43.113 CC lib/rdma_provider/common.o 00:02:43.113 CC lib/env_dpdk/env.o 00:02:43.113 CC lib/idxd/idxd.o 00:02:43.113 CC lib/json/json_util.o 00:02:43.113 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:43.113 CC lib/idxd/idxd_user.o 00:02:43.113 CC lib/vmd/led.o 00:02:43.113 CC lib/env_dpdk/memory.o 00:02:43.113 CC lib/json/json_write.o 00:02:43.113 CC lib/idxd/idxd_kernel.o 00:02:43.113 CC lib/env_dpdk/pci.o 00:02:43.113 CC lib/env_dpdk/init.o 00:02:43.113 CC lib/env_dpdk/threads.o 00:02:43.113 CC lib/env_dpdk/pci_ioat.o 00:02:43.113 CC lib/env_dpdk/pci_virtio.o 00:02:43.114 CC lib/env_dpdk/pci_vmd.o 00:02:43.114 CC lib/env_dpdk/pci_idxd.o 00:02:43.114 CC lib/env_dpdk/sigbus_handler.o 00:02:43.114 CC lib/env_dpdk/pci_event.o 00:02:43.114 CC lib/env_dpdk/pci_dpdk.o 00:02:43.114 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.114 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.114 LIB libspdk_trace_parser.a 00:02:43.114 SO libspdk_trace_parser.so.6.0 00:02:43.114 LIB libspdk_rdma_provider.a 00:02:43.114 SYMLINK libspdk_trace_parser.so 00:02:43.114 SO libspdk_rdma_provider.so.6.0 00:02:43.114 LIB libspdk_conf.a 00:02:43.114 SO libspdk_conf.so.6.0 00:02:43.114 SYMLINK libspdk_rdma_provider.so 00:02:43.372 SYMLINK libspdk_conf.so 00:02:43.372 LIB libspdk_json.a 00:02:43.372 SO libspdk_json.so.6.0 00:02:43.372 LIB libspdk_rdma_utils.a 00:02:43.372 SO libspdk_rdma_utils.so.1.0 00:02:43.372 SYMLINK libspdk_json.so 00:02:43.372 SYMLINK libspdk_rdma_utils.so 00:02:43.630 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.630 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.630 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.630 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.888 LIB libspdk_vmd.a 00:02:43.888 SO libspdk_vmd.so.6.0 00:02:43.888 LIB libspdk_idxd.a 00:02:43.888 LIB libspdk_jsonrpc.a 00:02:43.888 SO libspdk_idxd.so.12.1 00:02:43.888 SO libspdk_jsonrpc.so.6.0 00:02:43.888 SYMLINK libspdk_vmd.so 00:02:43.888 SYMLINK libspdk_idxd.so 00:02:43.888 SYMLINK libspdk_jsonrpc.so 00:02:44.146 CC 
lib/rpc/rpc.o 00:02:44.405 LIB libspdk_rpc.a 00:02:44.405 SO libspdk_rpc.so.6.0 00:02:44.405 SYMLINK libspdk_rpc.so 00:02:44.664 CC lib/notify/notify.o 00:02:44.664 CC lib/keyring/keyring.o 00:02:44.664 CC lib/notify/notify_rpc.o 00:02:44.664 CC lib/keyring/keyring_rpc.o 00:02:44.664 CC lib/trace/trace.o 00:02:44.664 CC lib/trace/trace_flags.o 00:02:44.664 CC lib/trace/trace_rpc.o 00:02:44.664 LIB libspdk_notify.a 00:02:44.664 SO libspdk_notify.so.6.0 00:02:44.922 SYMLINK libspdk_notify.so 00:02:44.922 LIB libspdk_keyring.a 00:02:44.922 SO libspdk_keyring.so.2.0 00:02:44.922 LIB libspdk_trace.a 00:02:44.922 SO libspdk_trace.so.11.0 00:02:44.922 SYMLINK libspdk_keyring.so 00:02:44.922 SYMLINK libspdk_trace.so 00:02:45.181 CC lib/sock/sock.o 00:02:45.181 CC lib/sock/sock_rpc.o 00:02:45.181 CC lib/thread/thread.o 00:02:45.181 CC lib/thread/iobuf.o 00:02:45.748 LIB libspdk_sock.a 00:02:45.748 SO libspdk_sock.so.10.0 00:02:45.748 SYMLINK libspdk_sock.so 00:02:46.007 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.007 CC lib/nvme/nvme_ctrlr.o 00:02:46.007 CC lib/nvme/nvme_fabric.o 00:02:46.007 CC lib/nvme/nvme_ns_cmd.o 00:02:46.007 CC lib/nvme/nvme_ns.o 00:02:46.007 CC lib/nvme/nvme_pcie_common.o 00:02:46.007 CC lib/nvme/nvme_pcie.o 00:02:46.007 CC lib/nvme/nvme_qpair.o 00:02:46.007 CC lib/nvme/nvme.o 00:02:46.007 CC lib/nvme/nvme_quirks.o 00:02:46.007 CC lib/nvme/nvme_transport.o 00:02:46.007 CC lib/nvme/nvme_discovery.o 00:02:46.007 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.007 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.007 CC lib/nvme/nvme_tcp.o 00:02:46.007 CC lib/nvme/nvme_opal.o 00:02:46.007 CC lib/nvme/nvme_io_msg.o 00:02:46.007 CC lib/nvme/nvme_poll_group.o 00:02:46.007 CC lib/nvme/nvme_zns.o 00:02:46.007 CC lib/nvme/nvme_stubs.o 00:02:46.007 CC lib/nvme/nvme_auth.o 00:02:46.007 CC lib/nvme/nvme_cuse.o 00:02:46.007 CC lib/nvme/nvme_rdma.o 00:02:46.007 LIB libspdk_env_dpdk.a 00:02:46.265 SO libspdk_env_dpdk.so.15.0 00:02:46.265 SYMLINK libspdk_env_dpdk.so 00:02:47.199 LIB libspdk_thread.a 00:02:47.199 SO libspdk_thread.so.10.2 00:02:47.458 SYMLINK libspdk_thread.so 00:02:47.458 CC lib/accel/accel.o 00:02:47.458 CC lib/blob/blobstore.o 00:02:47.458 CC lib/init/json_config.o 00:02:47.458 CC lib/fsdev/fsdev.o 00:02:47.458 CC lib/accel/accel_rpc.o 00:02:47.458 CC lib/blob/request.o 00:02:47.458 CC lib/fsdev/fsdev_io.o 00:02:47.458 CC lib/init/subsystem.o 00:02:47.458 CC lib/accel/accel_sw.o 00:02:47.458 CC lib/virtio/virtio.o 00:02:47.458 CC lib/blob/zeroes.o 00:02:47.458 CC lib/fsdev/fsdev_rpc.o 00:02:47.458 CC lib/init/subsystem_rpc.o 00:02:47.458 CC lib/virtio/virtio_vhost_user.o 00:02:47.458 CC lib/blob/blob_bs_dev.o 00:02:47.458 CC lib/init/rpc.o 00:02:47.458 CC lib/virtio/virtio_vfio_user.o 00:02:47.458 CC lib/virtio/virtio_pci.o 00:02:47.716 LIB libspdk_init.a 00:02:47.975 SO libspdk_init.so.6.0 00:02:47.975 SYMLINK libspdk_init.so 00:02:47.975 LIB libspdk_virtio.a 00:02:47.975 SO libspdk_virtio.so.7.0 00:02:47.975 SYMLINK libspdk_virtio.so 00:02:47.975 CC lib/event/app.o 00:02:47.975 CC lib/event/reactor.o 00:02:47.975 CC lib/event/log_rpc.o 00:02:47.975 CC lib/event/app_rpc.o 00:02:47.975 CC lib/event/scheduler_static.o 00:02:48.541 LIB libspdk_fsdev.a 00:02:48.541 SO libspdk_fsdev.so.1.0 00:02:48.541 SYMLINK libspdk_fsdev.so 00:02:48.541 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:48.541 LIB libspdk_event.a 00:02:48.799 SO libspdk_event.so.15.0 00:02:48.799 SYMLINK libspdk_event.so 00:02:48.799 LIB libspdk_nvme.a 00:02:49.058 SO libspdk_nvme.so.14.0 00:02:49.058 LIB libspdk_accel.a 
00:02:49.058 SO libspdk_accel.so.16.0 00:02:49.058 SYMLINK libspdk_accel.so 00:02:49.316 SYMLINK libspdk_nvme.so 00:02:49.316 CC lib/bdev/bdev.o 00:02:49.316 CC lib/bdev/bdev_rpc.o 00:02:49.316 CC lib/bdev/bdev_zone.o 00:02:49.316 CC lib/bdev/part.o 00:02:49.316 CC lib/bdev/scsi_nvme.o 00:02:49.576 LIB libspdk_fuse_dispatcher.a 00:02:49.576 SO libspdk_fuse_dispatcher.so.1.0 00:02:49.576 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.108 LIB libspdk_blob.a 00:02:52.108 SO libspdk_blob.so.11.0 00:02:52.108 SYMLINK libspdk_blob.so 00:02:52.108 CC lib/blobfs/blobfs.o 00:02:52.108 CC lib/blobfs/tree.o 00:02:52.108 CC lib/lvol/lvol.o 00:02:52.675 LIB libspdk_bdev.a 00:02:52.675 SO libspdk_bdev.so.17.0 00:02:52.935 SYMLINK libspdk_bdev.so 00:02:52.935 CC lib/ublk/ublk.o 00:02:52.935 CC lib/ftl/ftl_core.o 00:02:52.935 CC lib/nbd/nbd.o 00:02:52.935 CC lib/scsi/dev.o 00:02:52.935 CC lib/ublk/ublk_rpc.o 00:02:52.935 CC lib/ftl/ftl_init.o 00:02:52.935 CC lib/scsi/lun.o 00:02:52.935 CC lib/nbd/nbd_rpc.o 00:02:52.935 CC lib/nvmf/ctrlr.o 00:02:52.935 CC lib/ftl/ftl_layout.o 00:02:52.935 CC lib/ftl/ftl_debug.o 00:02:52.935 CC lib/nvmf/ctrlr_discovery.o 00:02:52.935 CC lib/nvmf/ctrlr_bdev.o 00:02:52.935 CC lib/scsi/port.o 00:02:52.935 CC lib/ftl/ftl_io.o 00:02:52.935 CC lib/nvmf/subsystem.o 00:02:52.935 CC lib/scsi/scsi.o 00:02:52.935 CC lib/ftl/ftl_sb.o 00:02:52.935 CC lib/nvmf/nvmf.o 00:02:52.935 CC lib/scsi/scsi_bdev.o 00:02:52.935 CC lib/scsi/scsi_pr.o 00:02:52.935 CC lib/nvmf/nvmf_rpc.o 00:02:52.935 CC lib/ftl/ftl_l2p.o 00:02:52.935 CC lib/nvmf/transport.o 00:02:52.935 CC lib/scsi/scsi_rpc.o 00:02:52.935 CC lib/ftl/ftl_l2p_flat.o 00:02:52.935 CC lib/scsi/task.o 00:02:52.935 CC lib/nvmf/stubs.o 00:02:52.935 CC lib/ftl/ftl_nv_cache.o 00:02:52.935 CC lib/nvmf/tcp.o 00:02:52.935 CC lib/nvmf/mdns_server.o 00:02:52.935 CC lib/ftl/ftl_band.o 00:02:52.935 CC lib/nvmf/rdma.o 00:02:52.935 CC lib/ftl/ftl_band_ops.o 00:02:52.935 CC lib/ftl/ftl_writer.o 00:02:52.935 CC lib/nvmf/auth.o 00:02:52.935 CC lib/ftl/ftl_rq.o 00:02:52.935 CC lib/ftl/ftl_reloc.o 00:02:53.200 CC lib/ftl/ftl_l2p_cache.o 00:02:53.200 CC lib/ftl/ftl_p2l_log.o 00:02:53.200 CC lib/ftl/ftl_p2l.o 00:02:53.200 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.200 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.200 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.200 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.200 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.200 LIB libspdk_blobfs.a 00:02:53.200 SO libspdk_blobfs.so.10.0 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.460 SYMLINK libspdk_blobfs.so 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.460 LIB libspdk_lvol.a 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.460 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.460 CC lib/ftl/utils/ftl_conf.o 00:02:53.460 SO libspdk_lvol.so.10.0 00:02:53.460 CC lib/ftl/utils/ftl_md.o 00:02:53.460 CC lib/ftl/utils/ftl_mempool.o 00:02:53.460 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.460 CC lib/ftl/utils/ftl_property.o 00:02:53.460 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.724 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.724 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.724 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.724 SYMLINK libspdk_lvol.so 00:02:53.724 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.724 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.724 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:53.724 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.724 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.724 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.724 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.725 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:53.988 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:53.988 CC lib/ftl/base/ftl_base_dev.o 00:02:53.988 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.988 CC lib/ftl/ftl_trace.o 00:02:53.988 LIB libspdk_nbd.a 00:02:54.246 SO libspdk_nbd.so.7.0 00:02:54.246 SYMLINK libspdk_nbd.so 00:02:54.246 LIB libspdk_scsi.a 00:02:54.246 SO libspdk_scsi.so.9.0 00:02:54.505 SYMLINK libspdk_scsi.so 00:02:54.505 LIB libspdk_ublk.a 00:02:54.505 SO libspdk_ublk.so.3.0 00:02:54.505 SYMLINK libspdk_ublk.so 00:02:54.505 CC lib/iscsi/conn.o 00:02:54.505 CC lib/vhost/vhost.o 00:02:54.505 CC lib/iscsi/init_grp.o 00:02:54.505 CC lib/vhost/vhost_rpc.o 00:02:54.505 CC lib/iscsi/iscsi.o 00:02:54.505 CC lib/vhost/vhost_scsi.o 00:02:54.505 CC lib/iscsi/param.o 00:02:54.505 CC lib/iscsi/portal_grp.o 00:02:54.505 CC lib/vhost/vhost_blk.o 00:02:54.505 CC lib/iscsi/tgt_node.o 00:02:54.505 CC lib/vhost/rte_vhost_user.o 00:02:54.505 CC lib/iscsi/iscsi_subsystem.o 00:02:54.505 CC lib/iscsi/iscsi_rpc.o 00:02:54.505 CC lib/iscsi/task.o 00:02:54.764 LIB libspdk_ftl.a 00:02:55.022 SO libspdk_ftl.so.9.0 00:02:55.281 SYMLINK libspdk_ftl.so 00:02:55.848 LIB libspdk_vhost.a 00:02:56.106 SO libspdk_vhost.so.8.0 00:02:56.106 SYMLINK libspdk_vhost.so 00:02:56.364 LIB libspdk_iscsi.a 00:02:56.364 LIB libspdk_nvmf.a 00:02:56.623 SO libspdk_iscsi.so.8.0 00:02:56.623 SO libspdk_nvmf.so.19.0 00:02:56.623 SYMLINK libspdk_iscsi.so 00:02:56.881 SYMLINK libspdk_nvmf.so 00:02:57.141 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.141 CC module/accel/error/accel_error.o 00:02:57.141 CC module/fsdev/aio/fsdev_aio.o 00:02:57.141 CC module/accel/error/accel_error_rpc.o 00:02:57.141 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.141 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.141 CC module/blob/bdev/blob_bdev.o 00:02:57.141 CC module/keyring/file/keyring.o 00:02:57.141 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.141 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.141 CC module/accel/dsa/accel_dsa.o 00:02:57.141 CC module/keyring/linux/keyring.o 00:02:57.141 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.141 CC module/keyring/file/keyring_rpc.o 00:02:57.141 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.141 CC module/keyring/linux/keyring_rpc.o 00:02:57.141 CC module/sock/posix/posix.o 00:02:57.141 CC module/accel/iaa/accel_iaa.o 00:02:57.141 CC module/accel/ioat/accel_ioat.o 00:02:57.141 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.141 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.141 LIB libspdk_env_dpdk_rpc.a 00:02:57.141 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.399 LIB libspdk_keyring_linux.a 00:02:57.399 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.399 LIB libspdk_scheduler_gscheduler.a 00:02:57.399 LIB libspdk_keyring_file.a 00:02:57.399 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.399 SO libspdk_keyring_linux.so.1.0 00:02:57.399 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.399 SO libspdk_keyring_file.so.2.0 00:02:57.399 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.399 LIB libspdk_accel_error.a 00:02:57.399 LIB libspdk_accel_ioat.a 00:02:57.399 SO libspdk_accel_error.so.2.0 00:02:57.399 LIB libspdk_scheduler_dynamic.a 00:02:57.399 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.399 LIB libspdk_accel_iaa.a 00:02:57.399 SYMLINK libspdk_keyring_linux.so 00:02:57.399 
SYMLINK libspdk_keyring_file.so 00:02:57.399 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.399 SO libspdk_accel_ioat.so.6.0 00:02:57.399 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.399 SO libspdk_accel_iaa.so.3.0 00:02:57.399 SYMLINK libspdk_accel_error.so 00:02:57.399 SYMLINK libspdk_accel_ioat.so 00:02:57.399 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.399 SYMLINK libspdk_accel_iaa.so 00:02:57.399 LIB libspdk_blob_bdev.a 00:02:57.657 LIB libspdk_accel_dsa.a 00:02:57.657 SO libspdk_blob_bdev.so.11.0 00:02:57.657 SO libspdk_accel_dsa.so.5.0 00:02:57.657 SYMLINK libspdk_blob_bdev.so 00:02:57.657 SYMLINK libspdk_accel_dsa.so 00:02:57.918 CC module/bdev/error/vbdev_error.o 00:02:57.918 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.918 CC module/bdev/delay/vbdev_delay.o 00:02:57.918 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.918 CC module/bdev/gpt/gpt.o 00:02:57.918 CC module/bdev/null/bdev_null.o 00:02:57.918 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.918 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.918 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.918 CC module/bdev/null/bdev_null_rpc.o 00:02:57.918 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.918 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.918 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.918 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.918 CC module/bdev/passthru/vbdev_passthru.o 00:02:57.918 CC module/bdev/aio/bdev_aio.o 00:02:57.918 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.918 CC module/bdev/malloc/bdev_malloc.o 00:02:57.918 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.918 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.918 CC module/bdev/iscsi/bdev_iscsi.o 00:02:57.918 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:57.918 CC module/bdev/split/vbdev_split.o 00:02:57.918 CC module/bdev/nvme/bdev_nvme.o 00:02:57.918 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.918 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.918 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.918 CC module/bdev/ftl/bdev_ftl.o 00:02:57.918 CC module/bdev/nvme/nvme_rpc.o 00:02:57.918 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.918 CC module/bdev/raid/bdev_raid.o 00:02:57.918 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:57.918 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.918 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.918 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.918 CC module/bdev/nvme/vbdev_opal.o 00:02:57.918 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.918 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:57.918 CC module/bdev/raid/raid0.o 00:02:57.918 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:57.918 CC module/bdev/raid/raid1.o 00:02:57.918 CC module/bdev/raid/concat.o 00:02:58.176 LIB libspdk_blobfs_bdev.a 00:02:58.176 SO libspdk_blobfs_bdev.so.6.0 00:02:58.176 LIB libspdk_fsdev_aio.a 00:02:58.176 SYMLINK libspdk_blobfs_bdev.so 00:02:58.176 SO libspdk_fsdev_aio.so.1.0 00:02:58.434 LIB libspdk_bdev_split.a 00:02:58.434 SO libspdk_bdev_split.so.6.0 00:02:58.434 LIB libspdk_bdev_null.a 00:02:58.434 SYMLINK libspdk_fsdev_aio.so 00:02:58.434 LIB libspdk_sock_posix.a 00:02:58.434 SO libspdk_bdev_null.so.6.0 00:02:58.434 LIB libspdk_bdev_ftl.a 00:02:58.434 SO libspdk_sock_posix.so.6.0 00:02:58.434 LIB libspdk_bdev_error.a 00:02:58.434 SYMLINK libspdk_bdev_split.so 00:02:58.434 SO libspdk_bdev_ftl.so.6.0 00:02:58.434 LIB libspdk_bdev_gpt.a 00:02:58.434 SO libspdk_bdev_error.so.6.0 00:02:58.434 SYMLINK libspdk_bdev_null.so 00:02:58.434 LIB libspdk_bdev_aio.a 00:02:58.434 SO libspdk_bdev_gpt.so.6.0 00:02:58.434 LIB 
libspdk_bdev_passthru.a 00:02:58.434 SYMLINK libspdk_sock_posix.so 00:02:58.434 SO libspdk_bdev_aio.so.6.0 00:02:58.434 SYMLINK libspdk_bdev_ftl.so 00:02:58.434 SYMLINK libspdk_bdev_error.so 00:02:58.434 LIB libspdk_bdev_iscsi.a 00:02:58.434 SO libspdk_bdev_passthru.so.6.0 00:02:58.434 SYMLINK libspdk_bdev_gpt.so 00:02:58.434 SO libspdk_bdev_iscsi.so.6.0 00:02:58.434 SYMLINK libspdk_bdev_aio.so 00:02:58.434 LIB libspdk_bdev_malloc.a 00:02:58.692 SYMLINK libspdk_bdev_passthru.so 00:02:58.692 SO libspdk_bdev_malloc.so.6.0 00:02:58.692 LIB libspdk_bdev_zone_block.a 00:02:58.692 SYMLINK libspdk_bdev_iscsi.so 00:02:58.692 SO libspdk_bdev_zone_block.so.6.0 00:02:58.692 LIB libspdk_bdev_delay.a 00:02:58.692 SYMLINK libspdk_bdev_malloc.so 00:02:58.692 SO libspdk_bdev_delay.so.6.0 00:02:58.692 SYMLINK libspdk_bdev_zone_block.so 00:02:58.692 SYMLINK libspdk_bdev_delay.so 00:02:58.692 LIB libspdk_bdev_virtio.a 00:02:58.692 SO libspdk_bdev_virtio.so.6.0 00:02:58.692 LIB libspdk_bdev_lvol.a 00:02:58.692 SO libspdk_bdev_lvol.so.6.0 00:02:58.951 SYMLINK libspdk_bdev_virtio.so 00:02:58.951 SYMLINK libspdk_bdev_lvol.so 00:02:59.517 LIB libspdk_bdev_raid.a 00:02:59.517 SO libspdk_bdev_raid.so.6.0 00:02:59.517 SYMLINK libspdk_bdev_raid.so 00:03:01.416 LIB libspdk_bdev_nvme.a 00:03:01.416 SO libspdk_bdev_nvme.so.7.0 00:03:01.416 SYMLINK libspdk_bdev_nvme.so 00:03:01.675 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.675 CC module/event/subsystems/keyring/keyring.o 00:03:01.675 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.675 CC module/event/subsystems/vmd/vmd.o 00:03:01.675 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.675 CC module/event/subsystems/fsdev/fsdev.o 00:03:01.675 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.675 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.675 CC module/event/subsystems/sock/sock.o 00:03:01.675 LIB libspdk_event_keyring.a 00:03:01.675 LIB libspdk_event_vhost_blk.a 00:03:01.675 LIB libspdk_event_fsdev.a 00:03:01.675 LIB libspdk_event_scheduler.a 00:03:01.675 LIB libspdk_event_sock.a 00:03:01.675 LIB libspdk_event_vmd.a 00:03:01.675 SO libspdk_event_keyring.so.1.0 00:03:01.675 SO libspdk_event_vhost_blk.so.3.0 00:03:01.675 SO libspdk_event_fsdev.so.1.0 00:03:01.675 LIB libspdk_event_iobuf.a 00:03:01.675 SO libspdk_event_scheduler.so.4.0 00:03:01.675 SO libspdk_event_sock.so.5.0 00:03:01.675 SO libspdk_event_vmd.so.6.0 00:03:01.675 SO libspdk_event_iobuf.so.3.0 00:03:01.675 SYMLINK libspdk_event_keyring.so 00:03:01.675 SYMLINK libspdk_event_vhost_blk.so 00:03:01.675 SYMLINK libspdk_event_fsdev.so 00:03:01.675 SYMLINK libspdk_event_scheduler.so 00:03:01.675 SYMLINK libspdk_event_sock.so 00:03:01.675 SYMLINK libspdk_event_vmd.so 00:03:01.933 SYMLINK libspdk_event_iobuf.so 00:03:01.933 CC module/event/subsystems/accel/accel.o 00:03:02.193 LIB libspdk_event_accel.a 00:03:02.193 SO libspdk_event_accel.so.6.0 00:03:02.193 SYMLINK libspdk_event_accel.so 00:03:02.451 CC module/event/subsystems/bdev/bdev.o 00:03:02.451 LIB libspdk_event_bdev.a 00:03:02.709 SO libspdk_event_bdev.so.6.0 00:03:02.709 SYMLINK libspdk_event_bdev.so 00:03:02.709 CC module/event/subsystems/ublk/ublk.o 00:03:02.709 CC module/event/subsystems/scsi/scsi.o 00:03:02.709 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.709 CC module/event/subsystems/nbd/nbd.o 00:03:02.709 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.967 LIB libspdk_event_nbd.a 00:03:02.967 LIB libspdk_event_ublk.a 00:03:02.967 LIB libspdk_event_scsi.a 00:03:02.967 SO libspdk_event_ublk.so.3.0 
00:03:02.967 SO libspdk_event_nbd.so.6.0 00:03:02.967 SO libspdk_event_scsi.so.6.0 00:03:02.967 SYMLINK libspdk_event_nbd.so 00:03:02.967 SYMLINK libspdk_event_ublk.so 00:03:02.967 SYMLINK libspdk_event_scsi.so 00:03:02.967 LIB libspdk_event_nvmf.a 00:03:02.967 SO libspdk_event_nvmf.so.6.0 00:03:03.225 SYMLINK libspdk_event_nvmf.so 00:03:03.225 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.225 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.225 LIB libspdk_event_vhost_scsi.a 00:03:03.516 LIB libspdk_event_iscsi.a 00:03:03.516 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.516 SO libspdk_event_iscsi.so.6.0 00:03:03.516 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.516 SYMLINK libspdk_event_iscsi.so 00:03:03.516 SO libspdk.so.6.0 00:03:03.516 SYMLINK libspdk.so 00:03:03.808 CC app/trace_record/trace_record.o 00:03:03.808 CC app/spdk_lspci/spdk_lspci.o 00:03:03.808 CXX app/trace/trace.o 00:03:03.808 CC app/spdk_top/spdk_top.o 00:03:03.808 CC app/spdk_nvme_perf/perf.o 00:03:03.808 CC app/spdk_nvme_identify/identify.o 00:03:03.808 TEST_HEADER include/spdk/accel.h 00:03:03.808 TEST_HEADER include/spdk/accel_module.h 00:03:03.808 TEST_HEADER include/spdk/assert.h 00:03:03.808 CC test/rpc_client/rpc_client_test.o 00:03:03.808 TEST_HEADER include/spdk/barrier.h 00:03:03.808 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.808 TEST_HEADER include/spdk/base64.h 00:03:03.808 TEST_HEADER include/spdk/bdev.h 00:03:03.808 TEST_HEADER include/spdk/bdev_module.h 00:03:03.808 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.808 TEST_HEADER include/spdk/bit_array.h 00:03:03.808 TEST_HEADER include/spdk/bit_pool.h 00:03:03.808 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.808 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.808 TEST_HEADER include/spdk/blobfs.h 00:03:03.808 TEST_HEADER include/spdk/blob.h 00:03:03.808 TEST_HEADER include/spdk/conf.h 00:03:03.808 TEST_HEADER include/spdk/config.h 00:03:03.808 TEST_HEADER include/spdk/cpuset.h 00:03:03.808 TEST_HEADER include/spdk/crc16.h 00:03:03.808 TEST_HEADER include/spdk/crc32.h 00:03:03.808 TEST_HEADER include/spdk/crc64.h 00:03:03.808 TEST_HEADER include/spdk/dif.h 00:03:03.808 TEST_HEADER include/spdk/dma.h 00:03:03.808 TEST_HEADER include/spdk/endian.h 00:03:03.808 TEST_HEADER include/spdk/env.h 00:03:03.808 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.808 TEST_HEADER include/spdk/event.h 00:03:03.808 TEST_HEADER include/spdk/fd.h 00:03:03.808 TEST_HEADER include/spdk/fd_group.h 00:03:03.808 TEST_HEADER include/spdk/file.h 00:03:03.808 TEST_HEADER include/spdk/fsdev.h 00:03:03.808 TEST_HEADER include/spdk/fsdev_module.h 00:03:03.808 TEST_HEADER include/spdk/ftl.h 00:03:03.808 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:03.808 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.808 TEST_HEADER include/spdk/hexlify.h 00:03:03.808 TEST_HEADER include/spdk/histogram_data.h 00:03:03.808 TEST_HEADER include/spdk/idxd.h 00:03:03.808 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.808 TEST_HEADER include/spdk/init.h 00:03:03.808 TEST_HEADER include/spdk/ioat.h 00:03:03.808 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.808 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.808 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.808 TEST_HEADER include/spdk/json.h 00:03:03.808 TEST_HEADER include/spdk/keyring.h 00:03:03.808 TEST_HEADER include/spdk/keyring_module.h 00:03:03.808 TEST_HEADER include/spdk/likely.h 00:03:03.808 TEST_HEADER include/spdk/log.h 00:03:03.808 TEST_HEADER include/spdk/lvol.h 00:03:03.808 TEST_HEADER include/spdk/md5.h 00:03:03.808 TEST_HEADER 
include/spdk/memory.h 00:03:03.808 TEST_HEADER include/spdk/mmio.h 00:03:03.808 TEST_HEADER include/spdk/nbd.h 00:03:03.808 TEST_HEADER include/spdk/net.h 00:03:03.808 TEST_HEADER include/spdk/notify.h 00:03:03.808 TEST_HEADER include/spdk/nvme.h 00:03:03.808 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.808 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.808 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.809 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.809 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.809 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.809 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.809 TEST_HEADER include/spdk/nvmf.h 00:03:03.809 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.809 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.809 TEST_HEADER include/spdk/opal.h 00:03:03.809 TEST_HEADER include/spdk/opal_spec.h 00:03:03.809 TEST_HEADER include/spdk/pci_ids.h 00:03:03.809 TEST_HEADER include/spdk/pipe.h 00:03:03.809 TEST_HEADER include/spdk/queue.h 00:03:03.809 TEST_HEADER include/spdk/reduce.h 00:03:03.809 TEST_HEADER include/spdk/rpc.h 00:03:03.809 TEST_HEADER include/spdk/scheduler.h 00:03:03.809 TEST_HEADER include/spdk/scsi.h 00:03:03.809 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.809 TEST_HEADER include/spdk/sock.h 00:03:03.809 TEST_HEADER include/spdk/stdinc.h 00:03:03.809 TEST_HEADER include/spdk/thread.h 00:03:03.809 TEST_HEADER include/spdk/string.h 00:03:03.809 TEST_HEADER include/spdk/trace.h 00:03:03.809 TEST_HEADER include/spdk/trace_parser.h 00:03:03.809 TEST_HEADER include/spdk/tree.h 00:03:03.809 TEST_HEADER include/spdk/ublk.h 00:03:03.809 TEST_HEADER include/spdk/util.h 00:03:03.809 TEST_HEADER include/spdk/uuid.h 00:03:03.809 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.809 TEST_HEADER include/spdk/version.h 00:03:03.809 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.809 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.809 TEST_HEADER include/spdk/vhost.h 00:03:03.809 TEST_HEADER include/spdk/vmd.h 00:03:03.809 TEST_HEADER include/spdk/xor.h 00:03:03.809 TEST_HEADER include/spdk/zipf.h 00:03:03.809 CXX test/cpp_headers/accel.o 00:03:03.809 CXX test/cpp_headers/accel_module.o 00:03:03.809 CXX test/cpp_headers/assert.o 00:03:03.809 CXX test/cpp_headers/barrier.o 00:03:03.809 CXX test/cpp_headers/base64.o 00:03:03.809 CXX test/cpp_headers/bdev.o 00:03:03.809 CXX test/cpp_headers/bdev_module.o 00:03:03.809 CXX test/cpp_headers/bdev_zone.o 00:03:03.809 CC app/nvmf_tgt/nvmf_main.o 00:03:03.809 CXX test/cpp_headers/bit_array.o 00:03:03.809 CXX test/cpp_headers/bit_pool.o 00:03:03.809 CXX test/cpp_headers/blob_bdev.o 00:03:03.809 CXX test/cpp_headers/blobfs_bdev.o 00:03:03.809 CXX test/cpp_headers/blobfs.o 00:03:03.809 CXX test/cpp_headers/blob.o 00:03:03.809 CC app/spdk_dd/spdk_dd.o 00:03:03.809 CC app/iscsi_tgt/iscsi_tgt.o 00:03:03.809 CXX test/cpp_headers/conf.o 00:03:03.809 CXX test/cpp_headers/config.o 00:03:03.809 CXX test/cpp_headers/cpuset.o 00:03:03.809 CXX test/cpp_headers/crc16.o 00:03:03.809 CC app/spdk_tgt/spdk_tgt.o 00:03:03.809 CXX test/cpp_headers/crc32.o 00:03:03.809 CC test/thread/poller_perf/poller_perf.o 00:03:03.809 CC test/app/jsoncat/jsoncat.o 00:03:03.809 CC examples/ioat/verify/verify.o 00:03:03.809 CC examples/util/zipf/zipf.o 00:03:03.809 CC examples/ioat/perf/perf.o 00:03:03.809 CC test/app/stub/stub.o 00:03:03.809 CC test/app/histogram_perf/histogram_perf.o 00:03:03.809 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:03.809 CC test/env/vtophys/vtophys.o 00:03:03.809 CC test/env/pci/pci_ut.o 
00:03:03.809 CC app/fio/nvme/fio_plugin.o 00:03:03.809 CC test/env/memory/memory_ut.o 00:03:04.077 CC test/app/bdev_svc/bdev_svc.o 00:03:04.077 CC test/dma/test_dma/test_dma.o 00:03:04.077 CC app/fio/bdev/fio_plugin.o 00:03:04.077 LINK spdk_lspci 00:03:04.077 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.077 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.077 LINK rpc_client_test 00:03:04.339 LINK spdk_nvme_discover 00:03:04.339 LINK jsoncat 00:03:04.339 LINK interrupt_tgt 00:03:04.339 LINK poller_perf 00:03:04.339 LINK histogram_perf 00:03:04.339 LINK nvmf_tgt 00:03:04.339 LINK vtophys 00:03:04.339 CXX test/cpp_headers/crc64.o 00:03:04.339 LINK zipf 00:03:04.339 CXX test/cpp_headers/dif.o 00:03:04.339 CXX test/cpp_headers/dma.o 00:03:04.339 CXX test/cpp_headers/endian.o 00:03:04.339 CXX test/cpp_headers/env_dpdk.o 00:03:04.339 CXX test/cpp_headers/env.o 00:03:04.339 LINK env_dpdk_post_init 00:03:04.339 LINK iscsi_tgt 00:03:04.339 CXX test/cpp_headers/event.o 00:03:04.339 CXX test/cpp_headers/fd_group.o 00:03:04.339 CXX test/cpp_headers/fd.o 00:03:04.339 CXX test/cpp_headers/file.o 00:03:04.339 CXX test/cpp_headers/fsdev.o 00:03:04.339 CXX test/cpp_headers/fsdev_module.o 00:03:04.339 LINK spdk_tgt 00:03:04.339 LINK stub 00:03:04.339 CXX test/cpp_headers/ftl.o 00:03:04.339 CXX test/cpp_headers/fuse_dispatcher.o 00:03:04.339 LINK bdev_svc 00:03:04.339 LINK spdk_trace_record 00:03:04.339 CXX test/cpp_headers/gpt_spec.o 00:03:04.339 CXX test/cpp_headers/hexlify.o 00:03:04.339 CXX test/cpp_headers/histogram_data.o 00:03:04.339 LINK verify 00:03:04.339 LINK ioat_perf 00:03:04.339 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.339 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.603 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.603 CXX test/cpp_headers/idxd.o 00:03:04.603 CXX test/cpp_headers/idxd_spec.o 00:03:04.603 CXX test/cpp_headers/init.o 00:03:04.603 CXX test/cpp_headers/ioat.o 00:03:04.603 CXX test/cpp_headers/ioat_spec.o 00:03:04.603 LINK spdk_dd 00:03:04.603 CXX test/cpp_headers/iscsi_spec.o 00:03:04.603 CXX test/cpp_headers/json.o 00:03:04.603 CXX test/cpp_headers/jsonrpc.o 00:03:04.603 CXX test/cpp_headers/keyring.o 00:03:04.603 CXX test/cpp_headers/keyring_module.o 00:03:04.603 CXX test/cpp_headers/likely.o 00:03:04.603 CXX test/cpp_headers/log.o 00:03:04.603 LINK spdk_trace 00:03:04.603 CXX test/cpp_headers/lvol.o 00:03:04.603 CXX test/cpp_headers/md5.o 00:03:04.603 CXX test/cpp_headers/memory.o 00:03:04.867 CXX test/cpp_headers/mmio.o 00:03:04.867 CXX test/cpp_headers/nbd.o 00:03:04.867 CXX test/cpp_headers/net.o 00:03:04.867 CXX test/cpp_headers/notify.o 00:03:04.867 CXX test/cpp_headers/nvme.o 00:03:04.867 CXX test/cpp_headers/nvme_intel.o 00:03:04.867 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.867 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.867 CXX test/cpp_headers/nvme_spec.o 00:03:04.867 CXX test/cpp_headers/nvme_zns.o 00:03:04.867 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.867 CC test/event/event_perf/event_perf.o 00:03:04.867 CC test/event/reactor/reactor.o 00:03:04.867 LINK pci_ut 00:03:04.867 CC test/event/reactor_perf/reactor_perf.o 00:03:04.867 CC test/event/app_repeat/app_repeat.o 00:03:04.867 CC examples/sock/hello_world/hello_sock.o 00:03:04.867 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.867 CC examples/thread/thread/thread_ex.o 00:03:04.867 CXX test/cpp_headers/nvmf.o 00:03:04.867 CXX test/cpp_headers/nvmf_spec.o 00:03:04.867 CC test/event/scheduler/scheduler.o 00:03:05.129 CC examples/idxd/perf/perf.o 00:03:05.129 CC examples/vmd/lsvmd/lsvmd.o 
00:03:05.129 CXX test/cpp_headers/nvmf_transport.o 00:03:05.129 CXX test/cpp_headers/opal.o 00:03:05.129 CXX test/cpp_headers/opal_spec.o 00:03:05.129 CC examples/vmd/led/led.o 00:03:05.129 CXX test/cpp_headers/pci_ids.o 00:03:05.129 CXX test/cpp_headers/pipe.o 00:03:05.129 CXX test/cpp_headers/queue.o 00:03:05.129 CXX test/cpp_headers/reduce.o 00:03:05.129 CXX test/cpp_headers/rpc.o 00:03:05.129 LINK test_dma 00:03:05.129 CXX test/cpp_headers/scheduler.o 00:03:05.129 CXX test/cpp_headers/scsi.o 00:03:05.129 CXX test/cpp_headers/scsi_spec.o 00:03:05.129 LINK nvme_fuzz 00:03:05.129 CXX test/cpp_headers/sock.o 00:03:05.129 CXX test/cpp_headers/stdinc.o 00:03:05.129 LINK reactor 00:03:05.129 LINK spdk_bdev 00:03:05.129 CXX test/cpp_headers/string.o 00:03:05.129 CXX test/cpp_headers/thread.o 00:03:05.129 CXX test/cpp_headers/trace.o 00:03:05.129 LINK reactor_perf 00:03:05.129 LINK event_perf 00:03:05.129 CXX test/cpp_headers/trace_parser.o 00:03:05.129 CXX test/cpp_headers/tree.o 00:03:05.129 CXX test/cpp_headers/ublk.o 00:03:05.389 CXX test/cpp_headers/util.o 00:03:05.389 LINK app_repeat 00:03:05.389 CXX test/cpp_headers/uuid.o 00:03:05.389 LINK lsvmd 00:03:05.389 LINK spdk_nvme 00:03:05.389 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.389 CXX test/cpp_headers/version.o 00:03:05.389 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.389 CC app/vhost/vhost.o 00:03:05.389 LINK mem_callbacks 00:03:05.389 CXX test/cpp_headers/vhost.o 00:03:05.389 CXX test/cpp_headers/vmd.o 00:03:05.389 CXX test/cpp_headers/xor.o 00:03:05.389 CXX test/cpp_headers/zipf.o 00:03:05.389 LINK led 00:03:05.389 LINK vhost_fuzz 00:03:05.647 LINK scheduler 00:03:05.647 LINK thread 00:03:05.647 LINK hello_sock 00:03:05.647 LINK vhost 00:03:05.647 LINK spdk_nvme_perf 00:03:05.647 LINK idxd_perf 00:03:05.647 CC test/nvme/reset/reset.o 00:03:05.647 CC test/nvme/simple_copy/simple_copy.o 00:03:05.647 CC test/nvme/e2edp/nvme_dp.o 00:03:05.647 CC test/nvme/sgl/sgl.o 00:03:05.647 CC test/nvme/fdp/fdp.o 00:03:05.647 CC test/nvme/fused_ordering/fused_ordering.o 00:03:05.647 CC test/nvme/aer/aer.o 00:03:05.647 CC test/nvme/overhead/overhead.o 00:03:05.906 CC test/nvme/connect_stress/connect_stress.o 00:03:05.906 CC test/nvme/err_injection/err_injection.o 00:03:05.906 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:05.906 CC test/nvme/boot_partition/boot_partition.o 00:03:05.906 CC test/nvme/reserve/reserve.o 00:03:05.906 CC test/nvme/cuse/cuse.o 00:03:05.906 CC test/nvme/startup/startup.o 00:03:05.906 LINK spdk_nvme_identify 00:03:05.906 CC test/nvme/compliance/nvme_compliance.o 00:03:05.906 CC test/accel/dif/dif.o 00:03:05.906 LINK spdk_top 00:03:05.906 CC test/blobfs/mkfs/mkfs.o 00:03:05.906 CC test/lvol/esnap/esnap.o 00:03:06.164 LINK startup 00:03:06.164 CC examples/nvme/reconnect/reconnect.o 00:03:06.164 CC examples/nvme/hotplug/hotplug.o 00:03:06.164 CC examples/nvme/arbitration/arbitration.o 00:03:06.164 CC examples/nvme/hello_world/hello_world.o 00:03:06.164 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.164 CC examples/nvme/abort/abort.o 00:03:06.164 LINK doorbell_aers 00:03:06.164 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.164 LINK err_injection 00:03:06.164 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.164 LINK reserve 00:03:06.164 LINK fused_ordering 00:03:06.164 CC examples/accel/perf/accel_perf.o 00:03:06.164 LINK boot_partition 00:03:06.164 LINK connect_stress 00:03:06.164 CC examples/blob/hello_world/hello_blob.o 00:03:06.164 CC examples/blob/cli/blobcli.o 00:03:06.164 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:03:06.164 LINK reset 00:03:06.164 LINK nvme_dp 00:03:06.164 LINK aer 00:03:06.164 LINK mkfs 00:03:06.164 LINK fdp 00:03:06.422 LINK simple_copy 00:03:06.423 LINK overhead 00:03:06.423 LINK pmr_persistence 00:03:06.423 LINK cmb_copy 00:03:06.423 LINK sgl 00:03:06.423 LINK memory_ut 00:03:06.423 LINK nvme_compliance 00:03:06.423 LINK hello_world 00:03:06.423 LINK hello_blob 00:03:06.423 LINK hotplug 00:03:06.423 LINK reconnect 00:03:06.680 LINK abort 00:03:06.680 LINK arbitration 00:03:06.680 LINK hello_fsdev 00:03:06.680 LINK nvme_manage 00:03:06.938 LINK accel_perf 00:03:06.938 LINK blobcli 00:03:07.196 LINK dif 00:03:07.196 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.196 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.455 CC test/bdev/bdevio/bdevio.o 00:03:07.455 LINK hello_bdev 00:03:07.713 LINK iscsi_fuzz 00:03:07.713 LINK cuse 00:03:07.971 LINK bdevio 00:03:08.229 LINK bdevperf 00:03:08.486 CC examples/nvmf/nvmf/nvmf.o 00:03:09.051 LINK nvmf 00:03:13.235 LINK esnap 00:03:13.235 00:03:13.235 real 1m18.905s 00:03:13.235 user 13m6.808s 00:03:13.235 sys 2m34.954s 00:03:13.235 19:33:02 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:13.235 19:33:02 make -- common/autotest_common.sh@10 -- $ set +x 00:03:13.235 ************************************ 00:03:13.235 END TEST make 00:03:13.235 ************************************ 00:03:13.235 19:33:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:13.235 19:33:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:13.235 19:33:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:13.235 19:33:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.235 19:33:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:13.235 19:33:02 -- pm/common@44 -- $ pid=2763891 00:03:13.235 19:33:02 -- pm/common@50 -- $ kill -TERM 2763891 00:03:13.235 19:33:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.235 19:33:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:13.235 19:33:02 -- pm/common@44 -- $ pid=2763893 00:03:13.235 19:33:02 -- pm/common@50 -- $ kill -TERM 2763893 00:03:13.235 19:33:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.235 19:33:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:13.235 19:33:02 -- pm/common@44 -- $ pid=2763895 00:03:13.235 19:33:02 -- pm/common@50 -- $ kill -TERM 2763895 00:03:13.235 19:33:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.235 19:33:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:13.235 19:33:02 -- pm/common@44 -- $ pid=2763923 00:03:13.235 19:33:02 -- pm/common@50 -- $ sudo -E kill -TERM 2763923 00:03:13.235 19:33:02 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:13.235 19:33:02 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:13.235 19:33:02 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:13.235 19:33:03 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:13.235 19:33:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:13.235 19:33:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:13.494 19:33:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:13.494 19:33:03 -- 
scripts/common.sh@336 -- # IFS=.-: 00:03:13.494 19:33:03 -- scripts/common.sh@336 -- # read -ra ver1 00:03:13.494 19:33:03 -- scripts/common.sh@337 -- # IFS=.-: 00:03:13.494 19:33:03 -- scripts/common.sh@337 -- # read -ra ver2 00:03:13.494 19:33:03 -- scripts/common.sh@338 -- # local 'op=<' 00:03:13.494 19:33:03 -- scripts/common.sh@340 -- # ver1_l=2 00:03:13.494 19:33:03 -- scripts/common.sh@341 -- # ver2_l=1 00:03:13.494 19:33:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:13.494 19:33:03 -- scripts/common.sh@344 -- # case "$op" in 00:03:13.494 19:33:03 -- scripts/common.sh@345 -- # : 1 00:03:13.494 19:33:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:13.494 19:33:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:13.494 19:33:03 -- scripts/common.sh@365 -- # decimal 1 00:03:13.494 19:33:03 -- scripts/common.sh@353 -- # local d=1 00:03:13.494 19:33:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:13.494 19:33:03 -- scripts/common.sh@355 -- # echo 1 00:03:13.494 19:33:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:13.494 19:33:03 -- scripts/common.sh@366 -- # decimal 2 00:03:13.494 19:33:03 -- scripts/common.sh@353 -- # local d=2 00:03:13.494 19:33:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:13.494 19:33:03 -- scripts/common.sh@355 -- # echo 2 00:03:13.494 19:33:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:13.494 19:33:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:13.494 19:33:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:13.495 19:33:03 -- scripts/common.sh@368 -- # return 0 00:03:13.495 19:33:03 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:13.495 19:33:03 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:13.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.495 --rc genhtml_branch_coverage=1 00:03:13.495 --rc genhtml_function_coverage=1 00:03:13.495 --rc genhtml_legend=1 00:03:13.495 --rc geninfo_all_blocks=1 00:03:13.495 --rc geninfo_unexecuted_blocks=1 00:03:13.495 00:03:13.495 ' 00:03:13.495 19:33:03 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:13.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.495 --rc genhtml_branch_coverage=1 00:03:13.495 --rc genhtml_function_coverage=1 00:03:13.495 --rc genhtml_legend=1 00:03:13.495 --rc geninfo_all_blocks=1 00:03:13.495 --rc geninfo_unexecuted_blocks=1 00:03:13.495 00:03:13.495 ' 00:03:13.495 19:33:03 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:13.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.495 --rc genhtml_branch_coverage=1 00:03:13.495 --rc genhtml_function_coverage=1 00:03:13.495 --rc genhtml_legend=1 00:03:13.495 --rc geninfo_all_blocks=1 00:03:13.495 --rc geninfo_unexecuted_blocks=1 00:03:13.495 00:03:13.495 ' 00:03:13.495 19:33:03 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:13.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.495 --rc genhtml_branch_coverage=1 00:03:13.495 --rc genhtml_function_coverage=1 00:03:13.495 --rc genhtml_legend=1 00:03:13.495 --rc geninfo_all_blocks=1 00:03:13.495 --rc geninfo_unexecuted_blocks=1 00:03:13.495 00:03:13.495 ' 00:03:13.495 19:33:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:13.495 19:33:03 -- nvmf/common.sh@7 -- # uname -s 00:03:13.495 19:33:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:03:13.495 19:33:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:13.495 19:33:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:13.495 19:33:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:13.495 19:33:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:13.495 19:33:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:13.495 19:33:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:13.495 19:33:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:13.495 19:33:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:13.495 19:33:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:13.495 19:33:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:13.495 19:33:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:13.495 19:33:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:13.495 19:33:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:13.495 19:33:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:13.495 19:33:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:13.495 19:33:03 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:13.495 19:33:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:13.495 19:33:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:13.495 19:33:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.495 19:33:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.495 19:33:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.495 19:33:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.495 19:33:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.495 19:33:03 -- paths/export.sh@5 -- # export PATH 00:03:13.495 19:33:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.495 19:33:03 -- nvmf/common.sh@51 -- # : 0 00:03:13.495 19:33:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:13.495 19:33:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:13.495 19:33:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:13.495 19:33:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:13.495 19:33:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:13.495 19:33:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:13.495 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:13.495 19:33:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:13.495 19:33:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:13.495 19:33:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:13.495 19:33:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:13.495 19:33:03 -- spdk/autotest.sh@32 -- # uname -s 00:03:13.495 19:33:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:13.495 19:33:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:13.495 19:33:03 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:13.495 19:33:03 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:13.495 19:33:03 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:13.495 19:33:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:13.495 19:33:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:13.495 19:33:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:13.495 19:33:03 -- spdk/autotest.sh@48 -- # udevadm_pid=2824083 00:03:13.495 19:33:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:13.495 19:33:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:13.495 19:33:03 -- pm/common@17 -- # local monitor 00:03:13.495 19:33:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.495 19:33:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.495 19:33:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.495 19:33:03 -- pm/common@21 -- # date +%s 00:03:13.495 19:33:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.495 19:33:03 -- pm/common@21 -- # date +%s 00:03:13.495 19:33:03 -- pm/common@25 -- # sleep 1 00:03:13.495 19:33:03 -- pm/common@21 -- # date +%s 00:03:13.495 19:33:03 -- pm/common@21 -- # date +%s 00:03:13.495 19:33:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728840783 00:03:13.495 19:33:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728840783 00:03:13.495 19:33:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728840783 00:03:13.495 19:33:03 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728840783 00:03:13.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728840783_collect-vmstat.pm.log 00:03:13.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728840783_collect-cpu-load.pm.log 00:03:13.495 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728840783_collect-cpu-temp.pm.log 00:03:13.495 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728840783_collect-bmc-pm.bmc.pm.log 00:03:14.430 19:33:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:14.430 19:33:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:14.430 19:33:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:14.430 19:33:04 -- common/autotest_common.sh@10 -- # set +x 00:03:14.430 19:33:04 -- spdk/autotest.sh@59 -- # create_test_list 00:03:14.430 19:33:04 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:14.430 19:33:04 -- common/autotest_common.sh@10 -- # set +x 00:03:14.430 19:33:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:14.430 19:33:04 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:14.430 19:33:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:14.430 19:33:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:14.430 19:33:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:14.430 19:33:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:14.430 19:33:04 -- common/autotest_common.sh@1455 -- # uname 00:03:14.430 19:33:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:14.430 19:33:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:14.430 19:33:04 -- common/autotest_common.sh@1475 -- # uname 00:03:14.430 19:33:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:14.430 19:33:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:14.430 19:33:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:14.430 lcov: LCOV version 1.15 00:03:14.430 19:33:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:32.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:50.582 19:33:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:50.582 19:33:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.582 19:33:39 -- common/autotest_common.sh@10 -- # set +x 00:03:50.582 19:33:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:50.582 19:33:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.148 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:51.148 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:51.406 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:51.406 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:51.406 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:51.406 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:51.406 0000:00:04.2 (8086 
0e22): Already using the ioatdma driver 00:03:51.406 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:51.406 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:51.406 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:51.406 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:51.406 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:51.406 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:51.406 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:51.406 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:51.406 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:51.406 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:51.664 19:33:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:51.664 19:33:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:51.664 19:33:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:51.664 19:33:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:51.664 19:33:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:51.664 19:33:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:51.664 19:33:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:51.664 19:33:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.664 19:33:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:51.664 19:33:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:51.664 19:33:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.664 19:33:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.664 19:33:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:51.664 19:33:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:51.664 19:33:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.664 No valid GPT data, bailing 00:03:51.664 19:33:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.664 19:33:41 -- scripts/common.sh@394 -- # pt= 00:03:51.664 19:33:41 -- scripts/common.sh@395 -- # return 1 00:03:51.664 19:33:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.664 1+0 records in 00:03:51.664 1+0 records out 00:03:51.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00169215 s, 620 MB/s 00:03:51.664 19:33:41 -- spdk/autotest.sh@105 -- # sync 00:03:51.664 19:33:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.664 19:33:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.664 19:33:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:53.565 19:33:43 -- spdk/autotest.sh@111 -- # uname -s 00:03:53.565 19:33:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:53.565 19:33:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:53.565 19:33:43 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:54.972 Hugepages 00:03:54.972 node hugesize free / total 00:03:54.972 node0 1048576kB 0 / 0 00:03:54.972 node0 2048kB 0 / 0 00:03:54.972 node1 1048576kB 0 / 0 00:03:54.972 node1 2048kB 0 / 0 00:03:54.972 00:03:54.972 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.972 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:54.972 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:54.972 I/OAT 0000:00:04.2 8086 0e22 0 
ioatdma - - 00:03:54.972 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:54.972 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:54.972 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:54.972 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:54.972 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:54.972 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:54.972 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:54.972 19:33:44 -- spdk/autotest.sh@117 -- # uname -s 00:03:54.972 19:33:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:54.972 19:33:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:54.972 19:33:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.351 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.351 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.351 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:57.290 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.290 19:33:47 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:58.667 19:33:48 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:58.667 19:33:48 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:58.667 19:33:48 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.667 19:33:48 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:58.667 19:33:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:58.667 19:33:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:58.667 19:33:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.667 19:33:48 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:58.667 19:33:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:58.667 19:33:48 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:58.667 19:33:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:03:58.667 19:33:48 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.603 Waiting for block devices as requested 00:03:59.603 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:59.603 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:59.864 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:59.864 0000:00:04.5 
(8086 0e25): vfio-pci -> ioatdma 00:03:59.864 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:00.157 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:00.157 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:00.157 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:00.157 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:00.157 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:00.443 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:00.443 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:00.443 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:00.443 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:00.702 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:00.702 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:00.702 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:00.961 19:33:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:00.961 19:33:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:04:00.961 19:33:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:00.961 19:33:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:00.961 19:33:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:00.961 19:33:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:00.961 19:33:50 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:00.961 19:33:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:00.961 19:33:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:00.961 19:33:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:00.961 19:33:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:00.961 19:33:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:00.961 19:33:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:00.961 19:33:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:00.961 19:33:50 -- common/autotest_common.sh@1541 -- # continue 00:04:00.961 19:33:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.961 19:33:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.961 19:33:50 -- common/autotest_common.sh@10 -- # set +x 00:04:00.961 19:33:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.961 19:33:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.961 19:33:50 -- common/autotest_common.sh@10 -- # set +x 00:04:00.961 19:33:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.335 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:02.336 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:02.336 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:02.336 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:02.336 0000:00:04.3 (8086 0e23): ioatdma -> 
vfio-pci 00:04:02.336 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:02.336 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:02.336 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:02.336 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.274 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.274 19:33:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.274 19:33:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.274 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:04:03.274 19:33:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.274 19:33:53 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:03.274 19:33:53 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.274 19:33:53 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:03.274 19:33:53 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:03.274 19:33:53 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:03.274 19:33:53 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.274 19:33:53 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:03.274 19:33:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:03.274 19:33:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:03.274 19:33:53 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.274 19:33:53 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.274 19:33:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:03.532 19:33:53 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:03.532 19:33:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:04:03.532 19:33:53 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:03.532 19:33:53 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:03.532 19:33:53 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:03.532 19:33:53 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:03.532 19:33:53 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:03.532 19:33:53 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:03.532 19:33:53 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:04:03.532 19:33:53 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:04:03.532 19:33:53 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2834812 00:04:03.532 19:33:53 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.532 19:33:53 -- common/autotest_common.sh@1583 -- # waitforlisten 2834812 00:04:03.532 19:33:53 -- common/autotest_common.sh@831 -- # '[' -z 2834812 ']' 00:04:03.532 19:33:53 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.532 19:33:53 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:03.532 19:33:53 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:04:03.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.532 19:33:53 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:03.532 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:04:03.532 [2024-10-13 19:33:53.255698] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:03.532 [2024-10-13 19:33:53.255864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834812 ] 00:04:03.791 [2024-10-13 19:33:53.391575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.791 [2024-10-13 19:33:53.530099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.725 19:33:54 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:04.725 19:33:54 -- common/autotest_common.sh@864 -- # return 0 00:04:04.725 19:33:54 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:04.725 19:33:54 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:04.725 19:33:54 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:08.007 nvme0n1 00:04:08.007 19:33:57 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:08.264 [2024-10-13 19:33:57.894345] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:08.264 [2024-10-13 19:33:57.894433] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:08.264 request: 00:04:08.264 { 00:04:08.264 "nvme_ctrlr_name": "nvme0", 00:04:08.264 "password": "test", 00:04:08.264 "method": "bdev_nvme_opal_revert", 00:04:08.264 "req_id": 1 00:04:08.264 } 00:04:08.264 Got JSON-RPC error response 00:04:08.264 response: 00:04:08.264 { 00:04:08.264 "code": -32603, 00:04:08.264 "message": "Internal error" 00:04:08.264 } 00:04:08.264 19:33:57 -- common/autotest_common.sh@1589 -- # true 00:04:08.264 19:33:57 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:08.264 19:33:57 -- common/autotest_common.sh@1593 -- # killprocess 2834812 00:04:08.264 19:33:57 -- common/autotest_common.sh@950 -- # '[' -z 2834812 ']' 00:04:08.264 19:33:57 -- common/autotest_common.sh@954 -- # kill -0 2834812 00:04:08.264 19:33:57 -- common/autotest_common.sh@955 -- # uname 00:04:08.264 19:33:57 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.264 19:33:57 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2834812 00:04:08.264 19:33:57 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.264 19:33:57 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.264 19:33:57 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2834812' 00:04:08.264 killing process with pid 2834812 00:04:08.264 19:33:57 -- common/autotest_common.sh@969 -- # kill 2834812 00:04:08.264 19:33:57 -- common/autotest_common.sh@974 -- # wait 2834812 00:04:12.446 19:34:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:12.446 19:34:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:12.446 19:34:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.446 19:34:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 
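The opal step above is a plain SPDK JSON-RPC exchange: rpc.py posts bdev_nvme_opal_revert to the target's UNIX socket, the drive refuses the admin SP session (error 18), and the target answers with the -32603 response shown, which the test script swallows with a trailing true. A hedged sketch of reproducing the same two calls by hand against a running spdk_tgt, assuming the default /var/tmp/spdk.sock socket and paths relative to the SPDK checkout:

    # attach the controller, then attempt the OPAL revert exactly as the test does
    sudo ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
    sudo ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_opal_revert -b nvme0 -p test ||
        echo "revert rejected by the drive; autotest treats this as non-fatal"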
00:04:12.446 19:34:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:12.446 19:34:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.446 19:34:01 -- common/autotest_common.sh@10 -- # set +x 00:04:12.446 19:34:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:12.446 19:34:01 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:12.446 19:34:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.446 19:34:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.446 19:34:01 -- common/autotest_common.sh@10 -- # set +x 00:04:12.446 ************************************ 00:04:12.446 START TEST env 00:04:12.446 ************************************ 00:04:12.446 19:34:01 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:12.446 * Looking for test storage... 00:04:12.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:12.446 19:34:01 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:12.446 19:34:01 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:12.446 19:34:01 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:12.446 19:34:01 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:12.446 19:34:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.446 19:34:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.446 19:34:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.446 19:34:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.446 19:34:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.447 19:34:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.447 19:34:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.447 19:34:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.447 19:34:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.447 19:34:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.447 19:34:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.447 19:34:01 env -- scripts/common.sh@344 -- # case "$op" in 00:04:12.447 19:34:01 env -- scripts/common.sh@345 -- # : 1 00:04:12.447 19:34:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.447 19:34:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.447 19:34:01 env -- scripts/common.sh@365 -- # decimal 1 00:04:12.447 19:34:01 env -- scripts/common.sh@353 -- # local d=1 00:04:12.447 19:34:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.447 19:34:01 env -- scripts/common.sh@355 -- # echo 1 00:04:12.447 19:34:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.447 19:34:01 env -- scripts/common.sh@366 -- # decimal 2 00:04:12.447 19:34:01 env -- scripts/common.sh@353 -- # local d=2 00:04:12.447 19:34:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.447 19:34:01 env -- scripts/common.sh@355 -- # echo 2 00:04:12.447 19:34:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.447 19:34:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.447 19:34:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.447 19:34:01 env -- scripts/common.sh@368 -- # return 0 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.447 --rc genhtml_branch_coverage=1 00:04:12.447 --rc genhtml_function_coverage=1 00:04:12.447 --rc genhtml_legend=1 00:04:12.447 --rc geninfo_all_blocks=1 00:04:12.447 --rc geninfo_unexecuted_blocks=1 00:04:12.447 00:04:12.447 ' 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.447 --rc genhtml_branch_coverage=1 00:04:12.447 --rc genhtml_function_coverage=1 00:04:12.447 --rc genhtml_legend=1 00:04:12.447 --rc geninfo_all_blocks=1 00:04:12.447 --rc geninfo_unexecuted_blocks=1 00:04:12.447 00:04:12.447 ' 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.447 --rc genhtml_branch_coverage=1 00:04:12.447 --rc genhtml_function_coverage=1 00:04:12.447 --rc genhtml_legend=1 00:04:12.447 --rc geninfo_all_blocks=1 00:04:12.447 --rc geninfo_unexecuted_blocks=1 00:04:12.447 00:04:12.447 ' 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.447 --rc genhtml_branch_coverage=1 00:04:12.447 --rc genhtml_function_coverage=1 00:04:12.447 --rc genhtml_legend=1 00:04:12.447 --rc geninfo_all_blocks=1 00:04:12.447 --rc geninfo_unexecuted_blocks=1 00:04:12.447 00:04:12.447 ' 00:04:12.447 19:34:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.447 19:34:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.447 19:34:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.447 ************************************ 00:04:12.447 START TEST env_memory 00:04:12.447 ************************************ 00:04:12.447 19:34:01 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:12.447 00:04:12.447 00:04:12.447 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.447 http://cunit.sourceforge.net/ 00:04:12.447 00:04:12.447 00:04:12.447 Suite: memory 00:04:12.447 Test: alloc and free memory map ...[2024-10-13 19:34:01.893111] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.447 passed 00:04:12.447 Test: mem map translation ...[2024-10-13 19:34:01.938862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.447 [2024-10-13 19:34:01.938903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.447 [2024-10-13 19:34:01.938987] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.447 [2024-10-13 19:34:01.939012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.447 passed 00:04:12.447 Test: mem map registration ...[2024-10-13 19:34:02.007058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:12.447 [2024-10-13 19:34:02.007102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:12.447 passed 00:04:12.447 Test: mem map adjacent registrations ...passed 00:04:12.447 00:04:12.447 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.447 suites 1 1 n/a 0 0 00:04:12.447 tests 4 4 4 0 0 00:04:12.447 asserts 152 152 152 0 n/a 00:04:12.447 00:04:12.447 Elapsed time = 0.240 seconds 00:04:12.447 00:04:12.447 real 0m0.260s 00:04:12.447 user 0m0.243s 00:04:12.447 sys 0m0.016s 00:04:12.447 19:34:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.447 19:34:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:12.447 ************************************ 00:04:12.447 END TEST env_memory 00:04:12.447 ************************************ 00:04:12.447 19:34:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:12.447 19:34:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.447 19:34:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.447 19:34:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.447 ************************************ 00:04:12.447 START TEST env_vtophys 00:04:12.447 ************************************ 00:04:12.447 19:34:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:12.447 EAL: lib.eal log level changed from notice to debug 00:04:12.447 EAL: Detected lcore 0 as core 0 on socket 0 00:04:12.447 EAL: Detected lcore 1 as core 1 on socket 0 00:04:12.447 EAL: Detected lcore 2 as core 2 on socket 0 00:04:12.447 EAL: Detected lcore 3 as core 3 on socket 0 00:04:12.447 EAL: Detected lcore 4 as core 4 on socket 0 00:04:12.447 EAL: Detected lcore 5 as core 5 on socket 0 00:04:12.447 EAL: Detected lcore 6 as core 8 on socket 0 00:04:12.447 EAL: Detected lcore 7 as core 9 on socket 0 00:04:12.447 EAL: Detected lcore 8 as core 10 on socket 0 00:04:12.447 EAL: Detected lcore 9 as core 11 on socket 0 00:04:12.447 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:12.447 EAL: Detected lcore 11 as core 13 on socket 0 00:04:12.447 EAL: Detected lcore 12 as core 0 on socket 1 00:04:12.447 EAL: Detected lcore 13 as core 1 on socket 1 00:04:12.447 EAL: Detected lcore 14 as core 2 on socket 1 00:04:12.447 EAL: Detected lcore 15 as core 3 on socket 1 00:04:12.447 EAL: Detected lcore 16 as core 4 on socket 1 00:04:12.447 EAL: Detected lcore 17 as core 5 on socket 1 00:04:12.447 EAL: Detected lcore 18 as core 8 on socket 1 00:04:12.447 EAL: Detected lcore 19 as core 9 on socket 1 00:04:12.447 EAL: Detected lcore 20 as core 10 on socket 1 00:04:12.447 EAL: Detected lcore 21 as core 11 on socket 1 00:04:12.447 EAL: Detected lcore 22 as core 12 on socket 1 00:04:12.447 EAL: Detected lcore 23 as core 13 on socket 1 00:04:12.447 EAL: Detected lcore 24 as core 0 on socket 0 00:04:12.447 EAL: Detected lcore 25 as core 1 on socket 0 00:04:12.447 EAL: Detected lcore 26 as core 2 on socket 0 00:04:12.447 EAL: Detected lcore 27 as core 3 on socket 0 00:04:12.447 EAL: Detected lcore 28 as core 4 on socket 0 00:04:12.447 EAL: Detected lcore 29 as core 5 on socket 0 00:04:12.447 EAL: Detected lcore 30 as core 8 on socket 0 00:04:12.447 EAL: Detected lcore 31 as core 9 on socket 0 00:04:12.447 EAL: Detected lcore 32 as core 10 on socket 0 00:04:12.447 EAL: Detected lcore 33 as core 11 on socket 0 00:04:12.447 EAL: Detected lcore 34 as core 12 on socket 0 00:04:12.447 EAL: Detected lcore 35 as core 13 on socket 0 00:04:12.447 EAL: Detected lcore 36 as core 0 on socket 1 00:04:12.447 EAL: Detected lcore 37 as core 1 on socket 1 00:04:12.447 EAL: Detected lcore 38 as core 2 on socket 1 00:04:12.447 EAL: Detected lcore 39 as core 3 on socket 1 00:04:12.447 EAL: Detected lcore 40 as core 4 on socket 1 00:04:12.447 EAL: Detected lcore 41 as core 5 on socket 1 00:04:12.447 EAL: Detected lcore 42 as core 8 on socket 1 00:04:12.447 EAL: Detected lcore 43 as core 9 on socket 1 00:04:12.447 EAL: Detected lcore 44 as core 10 on socket 1 00:04:12.447 EAL: Detected lcore 45 as core 11 on socket 1 00:04:12.447 EAL: Detected lcore 46 as core 12 on socket 1 00:04:12.447 EAL: Detected lcore 47 as core 13 on socket 1 00:04:12.447 EAL: Maximum logical cores by configuration: 128 00:04:12.447 EAL: Detected CPU lcores: 48 00:04:12.447 EAL: Detected NUMA nodes: 2 00:04:12.447 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:12.447 EAL: Detected shared linkage of DPDK 00:04:12.447 EAL: No shared files mode enabled, IPC will be disabled 00:04:12.447 EAL: Bus pci wants IOVA as 'DC' 00:04:12.447 EAL: Buses did not request a specific IOVA mode. 00:04:12.447 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:12.447 EAL: Selected IOVA mode 'VA' 00:04:12.447 EAL: Probing VFIO support... 00:04:12.447 EAL: IOMMU type 1 (Type 1) is supported 00:04:12.447 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:12.447 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:12.447 EAL: VFIO support initialized 00:04:12.447 EAL: Ask a virtual area of 0x2e000 bytes 00:04:12.447 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:12.447 EAL: Setting up physically contiguous memory... 
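By this point EAL has found 48 lcores on 2 NUMA nodes and will carve its memseg lists out of 2 MB hugepages on each socket, matching the setup.sh status table earlier in the log. One way to confirm the per-node hugepage pools outside the test, using only standard Linux sysfs paths (this loop is not an SPDK helper):

    # per-NUMA-node 2MB hugepage pools, as reported by the kernel
    for node in /sys/devices/system/node/node[0-9]*; do
        free=$(cat "$node"/hugepages/hugepages-2048kB/free_hugepages)
        total=$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)
        echo "$(basename "$node"): $free/$total free/total 2MB hugepages"
    done
    grep -i hugepages /proc/meminfo   # global counters for comparison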
00:04:12.447 EAL: Setting maximum number of open files to 524288 00:04:12.447 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:12.447 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:12.447 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:12.447 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.447 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:12.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.447 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:12.448 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:12.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.448 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:12.448 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.448 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:12.448 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:12.448 EAL: Hugepages will be freed exactly as allocated. 00:04:12.448 EAL: No shared files mode enabled, IPC is disabled 00:04:12.448 EAL: No shared files mode enabled, IPC is disabled 00:04:12.448 EAL: TSC frequency is ~2700000 KHz 00:04:12.448 EAL: Main lcore 0 is ready (tid=7f357695ca40;cpuset=[0]) 00:04:12.448 EAL: Trying to obtain current memory policy. 00:04:12.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.448 EAL: Restoring previous memory policy: 0 00:04:12.448 EAL: request: mp_malloc_sync 00:04:12.448 EAL: No shared files mode enabled, IPC is disabled 00:04:12.448 EAL: Heap on socket 0 was expanded by 2MB 00:04:12.448 EAL: No shared files mode enabled, IPC is disabled 00:04:12.706 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:12.706 EAL: Mem event callback 'spdk:(nil)' registered 00:04:12.706 00:04:12.706 00:04:12.706 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.706 http://cunit.sourceforge.net/ 00:04:12.706 00:04:12.706 00:04:12.706 Suite: components_suite 00:04:12.964 Test: vtophys_malloc_test ...passed 00:04:12.964 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:12.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.964 EAL: Restoring previous memory policy: 4 00:04:12.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.964 EAL: request: mp_malloc_sync 00:04:12.964 EAL: No shared files mode enabled, IPC is disabled 00:04:12.964 EAL: Heap on socket 0 was expanded by 4MB 00:04:12.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.964 EAL: request: mp_malloc_sync 00:04:12.964 EAL: No shared files mode enabled, IPC is disabled 00:04:12.964 EAL: Heap on socket 0 was shrunk by 4MB 00:04:12.964 EAL: Trying to obtain current memory policy. 00:04:12.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.964 EAL: Restoring previous memory policy: 4 00:04:12.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.964 EAL: request: mp_malloc_sync 00:04:12.964 EAL: No shared files mode enabled, IPC is disabled 00:04:12.964 EAL: Heap on socket 0 was expanded by 6MB 00:04:12.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.964 EAL: request: mp_malloc_sync 00:04:12.964 EAL: No shared files mode enabled, IPC is disabled 00:04:12.964 EAL: Heap on socket 0 was shrunk by 6MB 00:04:12.964 EAL: Trying to obtain current memory policy. 00:04:12.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.964 EAL: Restoring previous memory policy: 4 00:04:12.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.964 EAL: request: mp_malloc_sync 00:04:12.964 EAL: No shared files mode enabled, IPC is disabled 00:04:12.964 EAL: Heap on socket 0 was expanded by 10MB 00:04:12.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.964 EAL: request: mp_malloc_sync 00:04:12.964 EAL: No shared files mode enabled, IPC is disabled 00:04:12.964 EAL: Heap on socket 0 was shrunk by 10MB 00:04:12.964 EAL: Trying to obtain current memory policy. 
00:04:12.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.222 EAL: Restoring previous memory policy: 4 00:04:13.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.222 EAL: request: mp_malloc_sync 00:04:13.223 EAL: No shared files mode enabled, IPC is disabled 00:04:13.223 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.223 EAL: request: mp_malloc_sync 00:04:13.223 EAL: No shared files mode enabled, IPC is disabled 00:04:13.223 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.223 EAL: Trying to obtain current memory policy. 00:04:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.223 EAL: Restoring previous memory policy: 4 00:04:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.223 EAL: request: mp_malloc_sync 00:04:13.223 EAL: No shared files mode enabled, IPC is disabled 00:04:13.223 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.223 EAL: request: mp_malloc_sync 00:04:13.223 EAL: No shared files mode enabled, IPC is disabled 00:04:13.223 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.223 EAL: Trying to obtain current memory policy. 00:04:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.223 EAL: Restoring previous memory policy: 4 00:04:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.223 EAL: request: mp_malloc_sync 00:04:13.223 EAL: No shared files mode enabled, IPC is disabled 00:04:13.223 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.481 EAL: request: mp_malloc_sync 00:04:13.481 EAL: No shared files mode enabled, IPC is disabled 00:04:13.481 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.481 EAL: Trying to obtain current memory policy. 00:04:13.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.481 EAL: Restoring previous memory policy: 4 00:04:13.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.481 EAL: request: mp_malloc_sync 00:04:13.481 EAL: No shared files mode enabled, IPC is disabled 00:04:13.481 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.739 EAL: request: mp_malloc_sync 00:04:13.739 EAL: No shared files mode enabled, IPC is disabled 00:04:13.739 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.997 EAL: Trying to obtain current memory policy. 00:04:13.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.997 EAL: Restoring previous memory policy: 4 00:04:13.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.997 EAL: request: mp_malloc_sync 00:04:13.997 EAL: No shared files mode enabled, IPC is disabled 00:04:13.997 EAL: Heap on socket 0 was expanded by 258MB 00:04:14.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.563 EAL: request: mp_malloc_sync 00:04:14.563 EAL: No shared files mode enabled, IPC is disabled 00:04:14.563 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.128 EAL: Trying to obtain current memory policy. 
00:04:15.128 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.128 EAL: Restoring previous memory policy: 4 00:04:15.128 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.128 EAL: request: mp_malloc_sync 00:04:15.128 EAL: No shared files mode enabled, IPC is disabled 00:04:15.128 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.061 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.319 EAL: request: mp_malloc_sync 00:04:16.319 EAL: No shared files mode enabled, IPC is disabled 00:04:16.319 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.885 EAL: Trying to obtain current memory policy. 00:04:16.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.452 EAL: Restoring previous memory policy: 4 00:04:17.452 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.452 EAL: request: mp_malloc_sync 00:04:17.452 EAL: No shared files mode enabled, IPC is disabled 00:04:17.452 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.351 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.351 EAL: request: mp_malloc_sync 00:04:19.351 EAL: No shared files mode enabled, IPC is disabled 00:04:19.351 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.251 passed 00:04:21.251 00:04:21.251 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.251 suites 1 1 n/a 0 0 00:04:21.251 tests 2 2 2 0 0 00:04:21.251 asserts 497 497 497 0 n/a 00:04:21.251 00:04:21.251 Elapsed time = 8.260 seconds 00:04:21.251 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.251 EAL: request: mp_malloc_sync 00:04:21.251 EAL: No shared files mode enabled, IPC is disabled 00:04:21.251 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.251 EAL: No shared files mode enabled, IPC is disabled 00:04:21.251 EAL: No shared files mode enabled, IPC is disabled 00:04:21.251 EAL: No shared files mode enabled, IPC is disabled 00:04:21.251 00:04:21.251 real 0m8.533s 00:04:21.251 user 0m7.379s 00:04:21.251 sys 0m1.088s 00:04:21.251 19:34:10 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.251 19:34:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.251 ************************************ 00:04:21.251 END TEST env_vtophys 00:04:21.251 ************************************ 00:04:21.251 19:34:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.251 19:34:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.251 19:34:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.251 19:34:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.251 ************************************ 00:04:21.251 START TEST env_pci 00:04:21.251 ************************************ 00:04:21.251 19:34:10 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.251 00:04:21.251 00:04:21.251 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.251 http://cunit.sourceforge.net/ 00:04:21.251 00:04:21.251 00:04:21.251 Suite: pci 00:04:21.251 Test: pci_hook ...[2024-10-13 19:34:10.751927] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2836905 has claimed it 00:04:21.251 EAL: Cannot find device (10000:00:01.0) 00:04:21.251 EAL: Failed to attach device on primary process 00:04:21.251 passed 00:04:21.251 00:04:21.251 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:21.251 suites 1 1 n/a 0 0 00:04:21.251 tests 1 1 1 0 0 00:04:21.251 asserts 25 25 25 0 n/a 00:04:21.251 00:04:21.251 Elapsed time = 0.043 seconds 00:04:21.251 00:04:21.251 real 0m0.094s 00:04:21.251 user 0m0.043s 00:04:21.251 sys 0m0.050s 00:04:21.251 19:34:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.251 19:34:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.251 ************************************ 00:04:21.251 END TEST env_pci 00:04:21.251 ************************************ 00:04:21.251 19:34:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.251 19:34:10 env -- env/env.sh@15 -- # uname 00:04:21.251 19:34:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.251 19:34:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.251 19:34:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.251 19:34:10 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:21.251 19:34:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.251 19:34:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.251 ************************************ 00:04:21.251 START TEST env_dpdk_post_init 00:04:21.251 ************************************ 00:04:21.251 19:34:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.251 EAL: Detected CPU lcores: 48 00:04:21.251 EAL: Detected NUMA nodes: 2 00:04:21.251 EAL: Detected shared linkage of DPDK 00:04:21.251 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.251 EAL: Selected IOVA mode 'VA' 00:04:21.251 EAL: VFIO support initialized 00:04:21.251 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.508 EAL: Using IOMMU type 1 (Type 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:21.508 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:22.444 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
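The post-init test relies on setup.sh having rebound the I/OAT channels and the NVMe drive at 0000:88:00.0 to vfio-pci, which is what the ioatdma -> vfio-pci and nvme -> vfio-pci lines record. The current binding can be checked directly in sysfs, or listed wholesale with the same status subcommand the log already uses (paths relative to the SPDK checkout):

    # which kernel driver currently owns the NVMe device the tests use
    basename "$(readlink /sys/bus/pci/devices/0000:88:00.0/driver)"

    # the same hugepage/device table that setup.sh printed earlier in this log
    sudo ./scripts/setup.sh status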
00:04:25.724 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:25.724 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:25.724 Starting DPDK initialization... 00:04:25.724 Starting SPDK post initialization... 00:04:25.724 SPDK NVMe probe 00:04:25.724 Attaching to 0000:88:00.0 00:04:25.724 Attached to 0000:88:00.0 00:04:25.724 Cleaning up... 00:04:25.724 00:04:25.724 real 0m4.578s 00:04:25.724 user 0m3.110s 00:04:25.724 sys 0m0.523s 00:04:25.724 19:34:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.724 19:34:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.724 ************************************ 00:04:25.724 END TEST env_dpdk_post_init 00:04:25.724 ************************************ 00:04:25.724 19:34:15 env -- env/env.sh@26 -- # uname 00:04:25.724 19:34:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:25.724 19:34:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.724 19:34:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.724 19:34:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.724 19:34:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.724 ************************************ 00:04:25.724 START TEST env_mem_callbacks 00:04:25.724 ************************************ 00:04:25.724 19:34:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.724 EAL: Detected CPU lcores: 48 00:04:25.724 EAL: Detected NUMA nodes: 2 00:04:25.724 EAL: Detected shared linkage of DPDK 00:04:25.982 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.982 EAL: Selected IOVA mode 'VA' 00:04:25.982 EAL: VFIO support initialized 00:04:25.982 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.982 00:04:25.982 00:04:25.982 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.982 http://cunit.sourceforge.net/ 00:04:25.982 00:04:25.982 00:04:25.982 Suite: memory 00:04:25.982 Test: test ... 
00:04:25.982 register 0x200000200000 2097152 00:04:25.982 malloc 3145728 00:04:25.982 register 0x200000400000 4194304 00:04:25.982 buf 0x2000004fffc0 len 3145728 PASSED 00:04:25.982 malloc 64 00:04:25.982 buf 0x2000004ffec0 len 64 PASSED 00:04:25.982 malloc 4194304 00:04:25.982 register 0x200000800000 6291456 00:04:25.982 buf 0x2000009fffc0 len 4194304 PASSED 00:04:25.982 free 0x2000004fffc0 3145728 00:04:25.982 free 0x2000004ffec0 64 00:04:25.982 unregister 0x200000400000 4194304 PASSED 00:04:25.982 free 0x2000009fffc0 4194304 00:04:25.982 unregister 0x200000800000 6291456 PASSED 00:04:25.982 malloc 8388608 00:04:25.982 register 0x200000400000 10485760 00:04:25.982 buf 0x2000005fffc0 len 8388608 PASSED 00:04:25.982 free 0x2000005fffc0 8388608 00:04:25.982 unregister 0x200000400000 10485760 PASSED 00:04:25.982 passed 00:04:25.982 00:04:25.982 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.982 suites 1 1 n/a 0 0 00:04:25.982 tests 1 1 1 0 0 00:04:25.982 asserts 15 15 15 0 n/a 00:04:25.982 00:04:25.982 Elapsed time = 0.060 seconds 00:04:25.982 00:04:25.982 real 0m0.179s 00:04:25.982 user 0m0.093s 00:04:25.982 sys 0m0.085s 00:04:25.982 19:34:15 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.982 19:34:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.982 ************************************ 00:04:25.982 END TEST env_mem_callbacks 00:04:25.982 ************************************ 00:04:25.982 00:04:25.982 real 0m14.009s 00:04:25.982 user 0m11.051s 00:04:25.982 sys 0m1.967s 00:04:25.982 19:34:15 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.982 19:34:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.982 ************************************ 00:04:25.982 END TEST env 00:04:25.982 ************************************ 00:04:25.982 19:34:15 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:25.982 19:34:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.982 19:34:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.982 19:34:15 -- common/autotest_common.sh@10 -- # set +x 00:04:25.982 ************************************ 00:04:25.982 START TEST rpc 00:04:25.982 ************************************ 00:04:25.982 19:34:15 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:25.982 * Looking for test storage... 
00:04:25.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.982 19:34:15 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.982 19:34:15 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.982 19:34:15 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:26.240 19:34:15 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:26.240 19:34:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.240 19:34:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.240 19:34:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.240 19:34:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.240 19:34:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.240 19:34:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.240 19:34:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.240 19:34:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.240 19:34:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.240 19:34:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.241 19:34:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.241 19:34:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.241 19:34:15 rpc -- scripts/common.sh@345 -- # : 1 00:04:26.241 19:34:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.241 19:34:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.241 19:34:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.241 19:34:15 rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.241 19:34:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.241 19:34:15 rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.241 19:34:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.241 19:34:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.241 19:34:15 rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.241 19:34:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.241 19:34:15 rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.241 19:34:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.241 19:34:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.241 19:34:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.241 19:34:15 rpc -- scripts/common.sh@368 -- # return 0 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.241 --rc genhtml_branch_coverage=1 00:04:26.241 --rc genhtml_function_coverage=1 00:04:26.241 --rc genhtml_legend=1 00:04:26.241 --rc geninfo_all_blocks=1 00:04:26.241 --rc geninfo_unexecuted_blocks=1 00:04:26.241 00:04:26.241 ' 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.241 --rc genhtml_branch_coverage=1 00:04:26.241 --rc genhtml_function_coverage=1 00:04:26.241 --rc genhtml_legend=1 00:04:26.241 --rc geninfo_all_blocks=1 00:04:26.241 --rc geninfo_unexecuted_blocks=1 00:04:26.241 00:04:26.241 ' 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.241 --rc genhtml_branch_coverage=1 00:04:26.241 --rc genhtml_function_coverage=1 
00:04:26.241 --rc genhtml_legend=1 00:04:26.241 --rc geninfo_all_blocks=1 00:04:26.241 --rc geninfo_unexecuted_blocks=1 00:04:26.241 00:04:26.241 ' 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.241 --rc genhtml_branch_coverage=1 00:04:26.241 --rc genhtml_function_coverage=1 00:04:26.241 --rc genhtml_legend=1 00:04:26.241 --rc geninfo_all_blocks=1 00:04:26.241 --rc geninfo_unexecuted_blocks=1 00:04:26.241 00:04:26.241 ' 00:04:26.241 19:34:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2837700 00:04:26.241 19:34:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:26.241 19:34:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.241 19:34:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2837700 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 2837700 ']' 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.241 19:34:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.241 [2024-10-13 19:34:15.967346] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:26.241 [2024-10-13 19:34:15.967517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837700 ] 00:04:26.499 [2024-10-13 19:34:16.100473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.499 [2024-10-13 19:34:16.236691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:26.499 [2024-10-13 19:34:16.236789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2837700' to capture a snapshot of events at runtime. 00:04:26.499 [2024-10-13 19:34:16.236818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.499 [2024-10-13 19:34:16.236840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.499 [2024-10-13 19:34:16.236872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2837700 for offline analysis/debug. 
00:04:26.499 [2024-10-13 19:34:16.238505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.433 19:34:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.433 19:34:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:27.433 19:34:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.433 19:34:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.433 19:34:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.433 19:34:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.433 19:34:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.433 19:34:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.433 19:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.433 ************************************ 00:04:27.433 START TEST rpc_integrity 00:04:27.433 ************************************ 00:04:27.433 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:27.433 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.433 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.433 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.433 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.433 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.433 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.692 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.692 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.692 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.692 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.692 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.692 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.692 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.692 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.692 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.692 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.692 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.692 { 00:04:27.692 "name": "Malloc0", 00:04:27.692 "aliases": [ 00:04:27.692 "22e6d189-3c89-402f-ae6e-0200b6907fd2" 00:04:27.692 ], 00:04:27.692 "product_name": "Malloc disk", 00:04:27.692 "block_size": 512, 00:04:27.692 "num_blocks": 16384, 00:04:27.692 "uuid": "22e6d189-3c89-402f-ae6e-0200b6907fd2", 00:04:27.692 "assigned_rate_limits": { 00:04:27.692 "rw_ios_per_sec": 0, 00:04:27.692 "rw_mbytes_per_sec": 0, 00:04:27.692 "r_mbytes_per_sec": 0, 00:04:27.692 "w_mbytes_per_sec": 0 00:04:27.692 }, 
00:04:27.692 "claimed": false, 00:04:27.692 "zoned": false, 00:04:27.692 "supported_io_types": { 00:04:27.692 "read": true, 00:04:27.692 "write": true, 00:04:27.692 "unmap": true, 00:04:27.692 "flush": true, 00:04:27.692 "reset": true, 00:04:27.692 "nvme_admin": false, 00:04:27.692 "nvme_io": false, 00:04:27.692 "nvme_io_md": false, 00:04:27.692 "write_zeroes": true, 00:04:27.692 "zcopy": true, 00:04:27.692 "get_zone_info": false, 00:04:27.692 "zone_management": false, 00:04:27.692 "zone_append": false, 00:04:27.692 "compare": false, 00:04:27.692 "compare_and_write": false, 00:04:27.692 "abort": true, 00:04:27.692 "seek_hole": false, 00:04:27.692 "seek_data": false, 00:04:27.692 "copy": true, 00:04:27.692 "nvme_iov_md": false 00:04:27.692 }, 00:04:27.692 "memory_domains": [ 00:04:27.692 { 00:04:27.692 "dma_device_id": "system", 00:04:27.692 "dma_device_type": 1 00:04:27.692 }, 00:04:27.692 { 00:04:27.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.693 "dma_device_type": 2 00:04:27.693 } 00:04:27.693 ], 00:04:27.693 "driver_specific": {} 00:04:27.693 } 00:04:27.693 ]' 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 [2024-10-13 19:34:17.351032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.693 [2024-10-13 19:34:17.351110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.693 [2024-10-13 19:34:17.351162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:27.693 [2024-10-13 19:34:17.351188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.693 [2024-10-13 19:34:17.354047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.693 [2024-10-13 19:34:17.354087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.693 Passthru0 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.693 { 00:04:27.693 "name": "Malloc0", 00:04:27.693 "aliases": [ 00:04:27.693 "22e6d189-3c89-402f-ae6e-0200b6907fd2" 00:04:27.693 ], 00:04:27.693 "product_name": "Malloc disk", 00:04:27.693 "block_size": 512, 00:04:27.693 "num_blocks": 16384, 00:04:27.693 "uuid": "22e6d189-3c89-402f-ae6e-0200b6907fd2", 00:04:27.693 "assigned_rate_limits": { 00:04:27.693 "rw_ios_per_sec": 0, 00:04:27.693 "rw_mbytes_per_sec": 0, 00:04:27.693 "r_mbytes_per_sec": 0, 00:04:27.693 "w_mbytes_per_sec": 0 00:04:27.693 }, 00:04:27.693 "claimed": true, 00:04:27.693 "claim_type": "exclusive_write", 00:04:27.693 "zoned": false, 00:04:27.693 "supported_io_types": { 00:04:27.693 "read": true, 00:04:27.693 "write": true, 00:04:27.693 "unmap": true, 00:04:27.693 
"flush": true, 00:04:27.693 "reset": true, 00:04:27.693 "nvme_admin": false, 00:04:27.693 "nvme_io": false, 00:04:27.693 "nvme_io_md": false, 00:04:27.693 "write_zeroes": true, 00:04:27.693 "zcopy": true, 00:04:27.693 "get_zone_info": false, 00:04:27.693 "zone_management": false, 00:04:27.693 "zone_append": false, 00:04:27.693 "compare": false, 00:04:27.693 "compare_and_write": false, 00:04:27.693 "abort": true, 00:04:27.693 "seek_hole": false, 00:04:27.693 "seek_data": false, 00:04:27.693 "copy": true, 00:04:27.693 "nvme_iov_md": false 00:04:27.693 }, 00:04:27.693 "memory_domains": [ 00:04:27.693 { 00:04:27.693 "dma_device_id": "system", 00:04:27.693 "dma_device_type": 1 00:04:27.693 }, 00:04:27.693 { 00:04:27.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.693 "dma_device_type": 2 00:04:27.693 } 00:04:27.693 ], 00:04:27.693 "driver_specific": {} 00:04:27.693 }, 00:04:27.693 { 00:04:27.693 "name": "Passthru0", 00:04:27.693 "aliases": [ 00:04:27.693 "e72189db-f854-5209-b53c-658ff149bdfe" 00:04:27.693 ], 00:04:27.693 "product_name": "passthru", 00:04:27.693 "block_size": 512, 00:04:27.693 "num_blocks": 16384, 00:04:27.693 "uuid": "e72189db-f854-5209-b53c-658ff149bdfe", 00:04:27.693 "assigned_rate_limits": { 00:04:27.693 "rw_ios_per_sec": 0, 00:04:27.693 "rw_mbytes_per_sec": 0, 00:04:27.693 "r_mbytes_per_sec": 0, 00:04:27.693 "w_mbytes_per_sec": 0 00:04:27.693 }, 00:04:27.693 "claimed": false, 00:04:27.693 "zoned": false, 00:04:27.693 "supported_io_types": { 00:04:27.693 "read": true, 00:04:27.693 "write": true, 00:04:27.693 "unmap": true, 00:04:27.693 "flush": true, 00:04:27.693 "reset": true, 00:04:27.693 "nvme_admin": false, 00:04:27.693 "nvme_io": false, 00:04:27.693 "nvme_io_md": false, 00:04:27.693 "write_zeroes": true, 00:04:27.693 "zcopy": true, 00:04:27.693 "get_zone_info": false, 00:04:27.693 "zone_management": false, 00:04:27.693 "zone_append": false, 00:04:27.693 "compare": false, 00:04:27.693 "compare_and_write": false, 00:04:27.693 "abort": true, 00:04:27.693 "seek_hole": false, 00:04:27.693 "seek_data": false, 00:04:27.693 "copy": true, 00:04:27.693 "nvme_iov_md": false 00:04:27.693 }, 00:04:27.693 "memory_domains": [ 00:04:27.693 { 00:04:27.693 "dma_device_id": "system", 00:04:27.693 "dma_device_type": 1 00:04:27.693 }, 00:04:27.693 { 00:04:27.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.693 "dma_device_type": 2 00:04:27.693 } 00:04:27.693 ], 00:04:27.693 "driver_specific": { 00:04:27.693 "passthru": { 00:04:27.693 "name": "Passthru0", 00:04:27.693 "base_bdev_name": "Malloc0" 00:04:27.693 } 00:04:27.693 } 00:04:27.693 } 00:04:27.693 ]' 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.693 19:34:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.693 00:04:27.693 real 0m0.264s 00:04:27.693 user 0m0.161s 00:04:27.693 sys 0m0.015s 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.693 19:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 ************************************ 00:04:27.693 END TEST rpc_integrity 00:04:27.693 ************************************ 00:04:27.951 19:34:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.952 19:34:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.952 19:34:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.952 19:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 ************************************ 00:04:27.952 START TEST rpc_plugins 00:04:27.952 ************************************ 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.952 { 00:04:27.952 "name": "Malloc1", 00:04:27.952 "aliases": [ 00:04:27.952 "d756998b-5b8a-4ff0-8fcb-4fbfa0466931" 00:04:27.952 ], 00:04:27.952 "product_name": "Malloc disk", 00:04:27.952 "block_size": 4096, 00:04:27.952 "num_blocks": 256, 00:04:27.952 "uuid": "d756998b-5b8a-4ff0-8fcb-4fbfa0466931", 00:04:27.952 "assigned_rate_limits": { 00:04:27.952 "rw_ios_per_sec": 0, 00:04:27.952 "rw_mbytes_per_sec": 0, 00:04:27.952 "r_mbytes_per_sec": 0, 00:04:27.952 "w_mbytes_per_sec": 0 00:04:27.952 }, 00:04:27.952 "claimed": false, 00:04:27.952 "zoned": false, 00:04:27.952 "supported_io_types": { 00:04:27.952 "read": true, 00:04:27.952 "write": true, 00:04:27.952 "unmap": true, 00:04:27.952 "flush": true, 00:04:27.952 "reset": true, 00:04:27.952 "nvme_admin": false, 00:04:27.952 "nvme_io": false, 00:04:27.952 "nvme_io_md": false, 00:04:27.952 "write_zeroes": true, 00:04:27.952 "zcopy": true, 00:04:27.952 "get_zone_info": false, 00:04:27.952 "zone_management": false, 00:04:27.952 "zone_append": false, 00:04:27.952 "compare": false, 00:04:27.952 "compare_and_write": false, 00:04:27.952 "abort": true, 00:04:27.952 "seek_hole": false, 00:04:27.952 "seek_data": false, 00:04:27.952 "copy": true, 00:04:27.952 "nvme_iov_md": 
false 00:04:27.952 }, 00:04:27.952 "memory_domains": [ 00:04:27.952 { 00:04:27.952 "dma_device_id": "system", 00:04:27.952 "dma_device_type": 1 00:04:27.952 }, 00:04:27.952 { 00:04:27.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.952 "dma_device_type": 2 00:04:27.952 } 00:04:27.952 ], 00:04:27.952 "driver_specific": {} 00:04:27.952 } 00:04:27.952 ]' 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.952 19:34:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.952 00:04:27.952 real 0m0.117s 00:04:27.952 user 0m0.078s 00:04:27.952 sys 0m0.007s 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.952 19:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 ************************************ 00:04:27.952 END TEST rpc_plugins 00:04:27.952 ************************************ 00:04:27.952 19:34:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.952 19:34:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.952 19:34:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.952 19:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 ************************************ 00:04:27.952 START TEST rpc_trace_cmd_test 00:04:27.952 ************************************ 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.952 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2837700", 00:04:27.952 "tpoint_group_mask": "0x8", 00:04:27.952 "iscsi_conn": { 00:04:27.952 "mask": "0x2", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "scsi": { 00:04:27.952 "mask": "0x4", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "bdev": { 00:04:27.952 "mask": "0x8", 00:04:27.952 "tpoint_mask": "0xffffffffffffffff" 00:04:27.952 }, 00:04:27.952 "nvmf_rdma": { 00:04:27.952 "mask": "0x10", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "nvmf_tcp": { 00:04:27.952 "mask": "0x20", 00:04:27.952 
"tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "ftl": { 00:04:27.952 "mask": "0x40", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "blobfs": { 00:04:27.952 "mask": "0x80", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "dsa": { 00:04:27.952 "mask": "0x200", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "thread": { 00:04:27.952 "mask": "0x400", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "nvme_pcie": { 00:04:27.952 "mask": "0x800", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "iaa": { 00:04:27.952 "mask": "0x1000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "nvme_tcp": { 00:04:27.952 "mask": "0x2000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "bdev_nvme": { 00:04:27.952 "mask": "0x4000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "sock": { 00:04:27.952 "mask": "0x8000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "blob": { 00:04:27.952 "mask": "0x10000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "bdev_raid": { 00:04:27.952 "mask": "0x20000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 }, 00:04:27.952 "scheduler": { 00:04:27.952 "mask": "0x40000", 00:04:27.952 "tpoint_mask": "0x0" 00:04:27.952 } 00:04:27.952 }' 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.952 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.210 00:04:28.210 real 0m0.199s 00:04:28.210 user 0m0.172s 00:04:28.210 sys 0m0.015s 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.210 19:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.210 ************************************ 00:04:28.211 END TEST rpc_trace_cmd_test 00:04:28.211 ************************************ 00:04:28.211 19:34:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.211 19:34:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.211 19:34:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.211 19:34:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.211 19:34:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.211 19:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.211 ************************************ 00:04:28.211 START TEST rpc_daemon_integrity 00:04:28.211 ************************************ 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.211 19:34:17 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.211 19:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.211 { 00:04:28.211 "name": "Malloc2", 00:04:28.211 "aliases": [ 00:04:28.211 "c0b8a142-3d59-4aba-8fbe-b27015c33f47" 00:04:28.211 ], 00:04:28.211 "product_name": "Malloc disk", 00:04:28.211 "block_size": 512, 00:04:28.211 "num_blocks": 16384, 00:04:28.211 "uuid": "c0b8a142-3d59-4aba-8fbe-b27015c33f47", 00:04:28.211 "assigned_rate_limits": { 00:04:28.211 "rw_ios_per_sec": 0, 00:04:28.211 "rw_mbytes_per_sec": 0, 00:04:28.211 "r_mbytes_per_sec": 0, 00:04:28.211 "w_mbytes_per_sec": 0 00:04:28.211 }, 00:04:28.211 "claimed": false, 00:04:28.211 "zoned": false, 00:04:28.211 "supported_io_types": { 00:04:28.211 "read": true, 00:04:28.211 "write": true, 00:04:28.211 "unmap": true, 00:04:28.211 "flush": true, 00:04:28.211 "reset": true, 00:04:28.211 "nvme_admin": false, 00:04:28.211 "nvme_io": false, 00:04:28.211 "nvme_io_md": false, 00:04:28.211 "write_zeroes": true, 00:04:28.211 "zcopy": true, 00:04:28.211 "get_zone_info": false, 00:04:28.211 "zone_management": false, 00:04:28.211 "zone_append": false, 00:04:28.211 "compare": false, 00:04:28.211 "compare_and_write": false, 00:04:28.211 "abort": true, 00:04:28.211 "seek_hole": false, 00:04:28.211 "seek_data": false, 00:04:28.211 "copy": true, 00:04:28.211 "nvme_iov_md": false 00:04:28.211 }, 00:04:28.211 "memory_domains": [ 00:04:28.211 { 00:04:28.211 "dma_device_id": "system", 00:04:28.211 "dma_device_type": 1 00:04:28.211 }, 00:04:28.211 { 00:04:28.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.211 "dma_device_type": 2 00:04:28.211 } 00:04:28.211 ], 00:04:28.211 "driver_specific": {} 00:04:28.211 } 00:04:28.211 ]' 00:04:28.211 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.469 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.469 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.469 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.469 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.469 [2024-10-13 19:34:18.060244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.469 
[2024-10-13 19:34:18.060313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.469 [2024-10-13 19:34:18.060357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:28.469 [2024-10-13 19:34:18.060382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.469 [2024-10-13 19:34:18.063164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.469 [2024-10-13 19:34:18.063204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.469 Passthru0 00:04:28.469 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.470 { 00:04:28.470 "name": "Malloc2", 00:04:28.470 "aliases": [ 00:04:28.470 "c0b8a142-3d59-4aba-8fbe-b27015c33f47" 00:04:28.470 ], 00:04:28.470 "product_name": "Malloc disk", 00:04:28.470 "block_size": 512, 00:04:28.470 "num_blocks": 16384, 00:04:28.470 "uuid": "c0b8a142-3d59-4aba-8fbe-b27015c33f47", 00:04:28.470 "assigned_rate_limits": { 00:04:28.470 "rw_ios_per_sec": 0, 00:04:28.470 "rw_mbytes_per_sec": 0, 00:04:28.470 "r_mbytes_per_sec": 0, 00:04:28.470 "w_mbytes_per_sec": 0 00:04:28.470 }, 00:04:28.470 "claimed": true, 00:04:28.470 "claim_type": "exclusive_write", 00:04:28.470 "zoned": false, 00:04:28.470 "supported_io_types": { 00:04:28.470 "read": true, 00:04:28.470 "write": true, 00:04:28.470 "unmap": true, 00:04:28.470 "flush": true, 00:04:28.470 "reset": true, 00:04:28.470 "nvme_admin": false, 00:04:28.470 "nvme_io": false, 00:04:28.470 "nvme_io_md": false, 00:04:28.470 "write_zeroes": true, 00:04:28.470 "zcopy": true, 00:04:28.470 "get_zone_info": false, 00:04:28.470 "zone_management": false, 00:04:28.470 "zone_append": false, 00:04:28.470 "compare": false, 00:04:28.470 "compare_and_write": false, 00:04:28.470 "abort": true, 00:04:28.470 "seek_hole": false, 00:04:28.470 "seek_data": false, 00:04:28.470 "copy": true, 00:04:28.470 "nvme_iov_md": false 00:04:28.470 }, 00:04:28.470 "memory_domains": [ 00:04:28.470 { 00:04:28.470 "dma_device_id": "system", 00:04:28.470 "dma_device_type": 1 00:04:28.470 }, 00:04:28.470 { 00:04:28.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.470 "dma_device_type": 2 00:04:28.470 } 00:04:28.470 ], 00:04:28.470 "driver_specific": {} 00:04:28.470 }, 00:04:28.470 { 00:04:28.470 "name": "Passthru0", 00:04:28.470 "aliases": [ 00:04:28.470 "a7cb0ecf-d5a8-5aa5-83ca-14904c9c4b69" 00:04:28.470 ], 00:04:28.470 "product_name": "passthru", 00:04:28.470 "block_size": 512, 00:04:28.470 "num_blocks": 16384, 00:04:28.470 "uuid": "a7cb0ecf-d5a8-5aa5-83ca-14904c9c4b69", 00:04:28.470 "assigned_rate_limits": { 00:04:28.470 "rw_ios_per_sec": 0, 00:04:28.470 "rw_mbytes_per_sec": 0, 00:04:28.470 "r_mbytes_per_sec": 0, 00:04:28.470 "w_mbytes_per_sec": 0 00:04:28.470 }, 00:04:28.470 "claimed": false, 00:04:28.470 "zoned": false, 00:04:28.470 "supported_io_types": { 00:04:28.470 "read": true, 00:04:28.470 "write": true, 00:04:28.470 "unmap": true, 00:04:28.470 "flush": true, 00:04:28.470 "reset": true, 
00:04:28.470 "nvme_admin": false, 00:04:28.470 "nvme_io": false, 00:04:28.470 "nvme_io_md": false, 00:04:28.470 "write_zeroes": true, 00:04:28.470 "zcopy": true, 00:04:28.470 "get_zone_info": false, 00:04:28.470 "zone_management": false, 00:04:28.470 "zone_append": false, 00:04:28.470 "compare": false, 00:04:28.470 "compare_and_write": false, 00:04:28.470 "abort": true, 00:04:28.470 "seek_hole": false, 00:04:28.470 "seek_data": false, 00:04:28.470 "copy": true, 00:04:28.470 "nvme_iov_md": false 00:04:28.470 }, 00:04:28.470 "memory_domains": [ 00:04:28.470 { 00:04:28.470 "dma_device_id": "system", 00:04:28.470 "dma_device_type": 1 00:04:28.470 }, 00:04:28.470 { 00:04:28.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.470 "dma_device_type": 2 00:04:28.470 } 00:04:28.470 ], 00:04:28.470 "driver_specific": { 00:04:28.470 "passthru": { 00:04:28.470 "name": "Passthru0", 00:04:28.470 "base_bdev_name": "Malloc2" 00:04:28.470 } 00:04:28.470 } 00:04:28.470 } 00:04:28.470 ]' 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.470 00:04:28.470 real 0m0.248s 00:04:28.470 user 0m0.143s 00:04:28.470 sys 0m0.024s 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.470 19:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.470 ************************************ 00:04:28.470 END TEST rpc_daemon_integrity 00:04:28.470 ************************************ 00:04:28.470 19:34:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.470 19:34:18 rpc -- rpc/rpc.sh@84 -- # killprocess 2837700 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 2837700 ']' 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@954 -- # kill -0 2837700 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@955 -- # uname 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2837700 
00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2837700' 00:04:28.470 killing process with pid 2837700 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@969 -- # kill 2837700 00:04:28.470 19:34:18 rpc -- common/autotest_common.sh@974 -- # wait 2837700 00:04:31.006 00:04:31.006 real 0m4.957s 00:04:31.006 user 0m5.462s 00:04:31.006 sys 0m0.861s 00:04:31.006 19:34:20 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.006 19:34:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.006 ************************************ 00:04:31.006 END TEST rpc 00:04:31.006 ************************************ 00:04:31.006 19:34:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.006 19:34:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.006 19:34:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.006 19:34:20 -- common/autotest_common.sh@10 -- # set +x 00:04:31.006 ************************************ 00:04:31.006 START TEST skip_rpc 00:04:31.006 ************************************ 00:04:31.006 19:34:20 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.006 * Looking for test storage... 00:04:31.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.006 19:34:20 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:31.006 19:34:20 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:31.006 19:34:20 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:31.297 19:34:20 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.297 19:34:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.298 19:34:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.298 19:34:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.298 19:34:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.298 --rc genhtml_branch_coverage=1 00:04:31.298 --rc genhtml_function_coverage=1 00:04:31.298 --rc genhtml_legend=1 00:04:31.298 --rc geninfo_all_blocks=1 00:04:31.298 --rc geninfo_unexecuted_blocks=1 00:04:31.298 00:04:31.298 ' 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.298 --rc genhtml_branch_coverage=1 00:04:31.298 --rc genhtml_function_coverage=1 00:04:31.298 --rc genhtml_legend=1 00:04:31.298 --rc geninfo_all_blocks=1 00:04:31.298 --rc geninfo_unexecuted_blocks=1 00:04:31.298 00:04:31.298 ' 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.298 --rc genhtml_branch_coverage=1 00:04:31.298 --rc genhtml_function_coverage=1 00:04:31.298 --rc genhtml_legend=1 00:04:31.298 --rc geninfo_all_blocks=1 00:04:31.298 --rc geninfo_unexecuted_blocks=1 00:04:31.298 00:04:31.298 ' 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.298 --rc genhtml_branch_coverage=1 00:04:31.298 --rc genhtml_function_coverage=1 00:04:31.298 --rc genhtml_legend=1 00:04:31.298 --rc geninfo_all_blocks=1 00:04:31.298 --rc geninfo_unexecuted_blocks=1 00:04:31.298 00:04:31.298 ' 00:04:31.298 19:34:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.298 19:34:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:31.298 19:34:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.298 19:34:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.298 ************************************ 00:04:31.298 START TEST skip_rpc 00:04:31.298 ************************************ 00:04:31.298 19:34:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:31.298 
19:34:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2838418 00:04:31.298 19:34:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.298 19:34:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.298 19:34:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.298 [2024-10-13 19:34:21.017806] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:31.298 [2024-10-13 19:34:21.017969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838418 ] 00:04:31.580 [2024-10-13 19:34:21.165711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.580 [2024-10-13 19:34:21.304626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2838418 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2838418 ']' 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2838418 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2838418 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2838418' 00:04:36.844 killing process with pid 2838418 00:04:36.844 19:34:25 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2838418 00:04:36.844 19:34:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2838418 00:04:38.745 00:04:38.745 real 0m7.459s 00:04:38.745 user 0m6.925s 00:04:38.745 sys 0m0.532s 00:04:38.745 19:34:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.745 19:34:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.745 ************************************ 00:04:38.745 END TEST skip_rpc 00:04:38.745 ************************************ 00:04:38.745 19:34:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.745 19:34:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.745 19:34:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.745 19:34:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.745 ************************************ 00:04:38.745 START TEST skip_rpc_with_json 00:04:38.745 ************************************ 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2839380 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2839380 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2839380 ']' 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.745 19:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.745 [2024-10-13 19:34:28.517487] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:38.745 [2024-10-13 19:34:28.517634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839380 ] 00:04:39.003 [2024-10-13 19:34:28.644788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.003 [2024-10-13 19:34:28.776010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.937 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.937 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:39.937 19:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:39.937 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.937 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.937 [2024-10-13 19:34:29.748385] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:39.938 request: 00:04:39.938 { 00:04:39.938 "trtype": "tcp", 00:04:39.938 "method": "nvmf_get_transports", 00:04:39.938 "req_id": 1 00:04:39.938 } 00:04:39.938 Got JSON-RPC error response 00:04:39.938 response: 00:04:39.938 { 00:04:39.938 "code": -19, 00:04:39.938 "message": "No such device" 00:04:39.938 } 00:04:39.938 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.196 [2024-10-13 19:34:29.756529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.196 19:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.196 { 00:04:40.196 "subsystems": [ 00:04:40.196 { 00:04:40.196 "subsystem": "fsdev", 00:04:40.196 "config": [ 00:04:40.196 { 00:04:40.196 "method": "fsdev_set_opts", 00:04:40.196 "params": { 00:04:40.196 "fsdev_io_pool_size": 65535, 00:04:40.196 "fsdev_io_cache_size": 256 00:04:40.196 } 00:04:40.196 } 00:04:40.196 ] 00:04:40.196 }, 00:04:40.196 { 00:04:40.196 "subsystem": "keyring", 00:04:40.196 "config": [] 00:04:40.196 }, 00:04:40.196 { 00:04:40.196 "subsystem": "iobuf", 00:04:40.196 "config": [ 00:04:40.196 { 00:04:40.196 "method": "iobuf_set_options", 00:04:40.196 "params": { 00:04:40.196 "small_pool_count": 8192, 00:04:40.196 "large_pool_count": 1024, 00:04:40.196 "small_bufsize": 8192, 00:04:40.196 "large_bufsize": 135168 00:04:40.196 } 00:04:40.196 } 00:04:40.196 ] 00:04:40.196 }, 00:04:40.196 { 00:04:40.196 "subsystem": "sock", 00:04:40.196 "config": [ 00:04:40.196 { 00:04:40.196 "method": 
"sock_set_default_impl", 00:04:40.196 "params": { 00:04:40.196 "impl_name": "posix" 00:04:40.196 } 00:04:40.196 }, 00:04:40.196 { 00:04:40.196 "method": "sock_impl_set_options", 00:04:40.196 "params": { 00:04:40.196 "impl_name": "ssl", 00:04:40.196 "recv_buf_size": 4096, 00:04:40.196 "send_buf_size": 4096, 00:04:40.196 "enable_recv_pipe": true, 00:04:40.196 "enable_quickack": false, 00:04:40.196 "enable_placement_id": 0, 00:04:40.196 "enable_zerocopy_send_server": true, 00:04:40.196 "enable_zerocopy_send_client": false, 00:04:40.196 "zerocopy_threshold": 0, 00:04:40.196 "tls_version": 0, 00:04:40.196 "enable_ktls": false 00:04:40.196 } 00:04:40.196 }, 00:04:40.196 { 00:04:40.196 "method": "sock_impl_set_options", 00:04:40.196 "params": { 00:04:40.196 "impl_name": "posix", 00:04:40.196 "recv_buf_size": 2097152, 00:04:40.196 "send_buf_size": 2097152, 00:04:40.196 "enable_recv_pipe": true, 00:04:40.196 "enable_quickack": false, 00:04:40.196 "enable_placement_id": 0, 00:04:40.196 "enable_zerocopy_send_server": true, 00:04:40.196 "enable_zerocopy_send_client": false, 00:04:40.196 "zerocopy_threshold": 0, 00:04:40.196 "tls_version": 0, 00:04:40.197 "enable_ktls": false 00:04:40.197 } 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "vmd", 00:04:40.197 "config": [] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "accel", 00:04:40.197 "config": [ 00:04:40.197 { 00:04:40.197 "method": "accel_set_options", 00:04:40.197 "params": { 00:04:40.197 "small_cache_size": 128, 00:04:40.197 "large_cache_size": 16, 00:04:40.197 "task_count": 2048, 00:04:40.197 "sequence_count": 2048, 00:04:40.197 "buf_count": 2048 00:04:40.197 } 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "bdev", 00:04:40.197 "config": [ 00:04:40.197 { 00:04:40.197 "method": "bdev_set_options", 00:04:40.197 "params": { 00:04:40.197 "bdev_io_pool_size": 65535, 00:04:40.197 "bdev_io_cache_size": 256, 00:04:40.197 "bdev_auto_examine": true, 00:04:40.197 "iobuf_small_cache_size": 128, 00:04:40.197 "iobuf_large_cache_size": 16 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "bdev_raid_set_options", 00:04:40.197 "params": { 00:04:40.197 "process_window_size_kb": 1024, 00:04:40.197 "process_max_bandwidth_mb_sec": 0 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "bdev_iscsi_set_options", 00:04:40.197 "params": { 00:04:40.197 "timeout_sec": 30 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "bdev_nvme_set_options", 00:04:40.197 "params": { 00:04:40.197 "action_on_timeout": "none", 00:04:40.197 "timeout_us": 0, 00:04:40.197 "timeout_admin_us": 0, 00:04:40.197 "keep_alive_timeout_ms": 10000, 00:04:40.197 "arbitration_burst": 0, 00:04:40.197 "low_priority_weight": 0, 00:04:40.197 "medium_priority_weight": 0, 00:04:40.197 "high_priority_weight": 0, 00:04:40.197 "nvme_adminq_poll_period_us": 10000, 00:04:40.197 "nvme_ioq_poll_period_us": 0, 00:04:40.197 "io_queue_requests": 0, 00:04:40.197 "delay_cmd_submit": true, 00:04:40.197 "transport_retry_count": 4, 00:04:40.197 "bdev_retry_count": 3, 00:04:40.197 "transport_ack_timeout": 0, 00:04:40.197 "ctrlr_loss_timeout_sec": 0, 00:04:40.197 "reconnect_delay_sec": 0, 00:04:40.197 "fast_io_fail_timeout_sec": 0, 00:04:40.197 "disable_auto_failback": false, 00:04:40.197 "generate_uuids": false, 00:04:40.197 "transport_tos": 0, 00:04:40.197 "nvme_error_stat": false, 00:04:40.197 "rdma_srq_size": 0, 00:04:40.197 "io_path_stat": false, 00:04:40.197 
"allow_accel_sequence": false, 00:04:40.197 "rdma_max_cq_size": 0, 00:04:40.197 "rdma_cm_event_timeout_ms": 0, 00:04:40.197 "dhchap_digests": [ 00:04:40.197 "sha256", 00:04:40.197 "sha384", 00:04:40.197 "sha512" 00:04:40.197 ], 00:04:40.197 "dhchap_dhgroups": [ 00:04:40.197 "null", 00:04:40.197 "ffdhe2048", 00:04:40.197 "ffdhe3072", 00:04:40.197 "ffdhe4096", 00:04:40.197 "ffdhe6144", 00:04:40.197 "ffdhe8192" 00:04:40.197 ] 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "bdev_nvme_set_hotplug", 00:04:40.197 "params": { 00:04:40.197 "period_us": 100000, 00:04:40.197 "enable": false 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "bdev_wait_for_examine" 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "scsi", 00:04:40.197 "config": null 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "scheduler", 00:04:40.197 "config": [ 00:04:40.197 { 00:04:40.197 "method": "framework_set_scheduler", 00:04:40.197 "params": { 00:04:40.197 "name": "static" 00:04:40.197 } 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "vhost_scsi", 00:04:40.197 "config": [] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "vhost_blk", 00:04:40.197 "config": [] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "ublk", 00:04:40.197 "config": [] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "nbd", 00:04:40.197 "config": [] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "nvmf", 00:04:40.197 "config": [ 00:04:40.197 { 00:04:40.197 "method": "nvmf_set_config", 00:04:40.197 "params": { 00:04:40.197 "discovery_filter": "match_any", 00:04:40.197 "admin_cmd_passthru": { 00:04:40.197 "identify_ctrlr": false 00:04:40.197 }, 00:04:40.197 "dhchap_digests": [ 00:04:40.197 "sha256", 00:04:40.197 "sha384", 00:04:40.197 "sha512" 00:04:40.197 ], 00:04:40.197 "dhchap_dhgroups": [ 00:04:40.197 "null", 00:04:40.197 "ffdhe2048", 00:04:40.197 "ffdhe3072", 00:04:40.197 "ffdhe4096", 00:04:40.197 "ffdhe6144", 00:04:40.197 "ffdhe8192" 00:04:40.197 ] 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "nvmf_set_max_subsystems", 00:04:40.197 "params": { 00:04:40.197 "max_subsystems": 1024 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "nvmf_set_crdt", 00:04:40.197 "params": { 00:04:40.197 "crdt1": 0, 00:04:40.197 "crdt2": 0, 00:04:40.197 "crdt3": 0 00:04:40.197 } 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "method": "nvmf_create_transport", 00:04:40.197 "params": { 00:04:40.197 "trtype": "TCP", 00:04:40.197 "max_queue_depth": 128, 00:04:40.197 "max_io_qpairs_per_ctrlr": 127, 00:04:40.197 "in_capsule_data_size": 4096, 00:04:40.197 "max_io_size": 131072, 00:04:40.197 "io_unit_size": 131072, 00:04:40.197 "max_aq_depth": 128, 00:04:40.197 "num_shared_buffers": 511, 00:04:40.197 "buf_cache_size": 4294967295, 00:04:40.197 "dif_insert_or_strip": false, 00:04:40.197 "zcopy": false, 00:04:40.197 "c2h_success": true, 00:04:40.197 "sock_priority": 0, 00:04:40.197 "abort_timeout_sec": 1, 00:04:40.197 "ack_timeout": 0, 00:04:40.197 "data_wr_pool_size": 0 00:04:40.197 } 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 }, 00:04:40.197 { 00:04:40.197 "subsystem": "iscsi", 00:04:40.197 "config": [ 00:04:40.197 { 00:04:40.197 "method": "iscsi_set_options", 00:04:40.197 "params": { 00:04:40.197 "node_base": "iqn.2016-06.io.spdk", 00:04:40.197 "max_sessions": 128, 00:04:40.197 "max_connections_per_session": 2, 00:04:40.197 "max_queue_depth": 64, 00:04:40.197 "default_time2wait": 2, 
00:04:40.197 "default_time2retain": 20, 00:04:40.197 "first_burst_length": 8192, 00:04:40.197 "immediate_data": true, 00:04:40.197 "allow_duplicated_isid": false, 00:04:40.197 "error_recovery_level": 0, 00:04:40.197 "nop_timeout": 60, 00:04:40.197 "nop_in_interval": 30, 00:04:40.197 "disable_chap": false, 00:04:40.197 "require_chap": false, 00:04:40.197 "mutual_chap": false, 00:04:40.197 "chap_group": 0, 00:04:40.197 "max_large_datain_per_connection": 64, 00:04:40.197 "max_r2t_per_connection": 4, 00:04:40.197 "pdu_pool_size": 36864, 00:04:40.197 "immediate_data_pool_size": 16384, 00:04:40.197 "data_out_pool_size": 2048 00:04:40.197 } 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 } 00:04:40.197 ] 00:04:40.197 } 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2839380 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2839380 ']' 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2839380 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839380 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2839380' 00:04:40.197 killing process with pid 2839380 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2839380 00:04:40.197 19:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2839380 00:04:42.725 19:34:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2839792 00:04:42.725 19:34:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.725 19:34:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2839792 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2839792 ']' 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2839792 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839792 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2839792' 00:04:47.987 killing process with pid 2839792 00:04:47.987 19:34:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2839792 00:04:47.987 19:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2839792 00:04:50.514 19:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.514 19:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.514 00:04:50.514 real 0m11.410s 00:04:50.514 user 0m10.935s 00:04:50.514 sys 0m1.082s 00:04:50.514 19:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.514 19:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.514 ************************************ 00:04:50.514 END TEST skip_rpc_with_json 00:04:50.514 ************************************ 00:04:50.514 19:34:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.514 19:34:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.514 19:34:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.514 19:34:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.514 ************************************ 00:04:50.514 START TEST skip_rpc_with_delay 00:04:50.514 ************************************ 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.515 19:34:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.515 [2024-10-13 19:34:39.969715] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is 
going to be started. 00:04:50.515 19:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:50.515 19:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.515 19:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:50.515 19:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.515 00:04:50.515 real 0m0.148s 00:04:50.515 user 0m0.081s 00:04:50.515 sys 0m0.067s 00:04:50.515 19:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.515 19:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.515 ************************************ 00:04:50.515 END TEST skip_rpc_with_delay 00:04:50.515 ************************************ 00:04:50.515 19:34:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.515 19:34:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.515 19:34:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.515 19:34:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.515 19:34:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.515 19:34:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.515 ************************************ 00:04:50.515 START TEST exit_on_failed_rpc_init 00:04:50.515 ************************************ 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2840777 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2840777 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2840777 ']' 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.515 19:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.515 [2024-10-13 19:34:40.179557] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:50.515 [2024-10-13 19:34:40.179762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840777 ] 00:04:50.515 [2024-10-13 19:34:40.314841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.773 [2024-10-13 19:34:40.450117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.708 19:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.708 [2024-10-13 19:34:41.505112] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:51.708 [2024-10-13 19:34:41.505258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840916 ] 00:04:51.967 [2024-10-13 19:34:41.642008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.967 [2024-10-13 19:34:41.780124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.967 [2024-10-13 19:34:41.780272] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:51.967 [2024-10-13 19:34:41.780314] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.967 [2024-10-13 19:34:41.780337] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2840777 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2840777 ']' 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2840777 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2840777 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2840777' 00:04:52.534 killing process with pid 2840777 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2840777 00:04:52.534 19:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2840777 00:04:55.064 00:04:55.064 real 0m4.447s 00:04:55.064 user 0m4.890s 00:04:55.064 sys 0m0.763s 00:04:55.064 19:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.064 19:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.064 ************************************ 00:04:55.064 END TEST exit_on_failed_rpc_init 00:04:55.064 ************************************ 00:04:55.064 19:34:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:55.064 00:04:55.064 real 0m23.803s 00:04:55.064 user 0m22.987s 00:04:55.064 sys 0m2.647s 00:04:55.064 19:34:44 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.064 19:34:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.064 ************************************ 00:04:55.064 END TEST skip_rpc 00:04:55.064 ************************************ 00:04:55.064 19:34:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.064 19:34:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.064 19:34:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.064 19:34:44 -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.064 ************************************ 00:04:55.064 START TEST rpc_client 00:04:55.064 ************************************ 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.064 * Looking for test storage... 00:04:55.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.064 19:34:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.064 --rc genhtml_branch_coverage=1 00:04:55.064 --rc genhtml_function_coverage=1 00:04:55.064 --rc genhtml_legend=1 00:04:55.064 --rc geninfo_all_blocks=1 00:04:55.064 --rc geninfo_unexecuted_blocks=1 00:04:55.064 00:04:55.064 ' 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.064 --rc genhtml_branch_coverage=1 00:04:55.064 --rc genhtml_function_coverage=1 00:04:55.064 --rc genhtml_legend=1 00:04:55.064 --rc geninfo_all_blocks=1 00:04:55.064 --rc geninfo_unexecuted_blocks=1 00:04:55.064 00:04:55.064 ' 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.064 --rc genhtml_branch_coverage=1 00:04:55.064 --rc genhtml_function_coverage=1 00:04:55.064 --rc genhtml_legend=1 00:04:55.064 --rc geninfo_all_blocks=1 00:04:55.064 --rc geninfo_unexecuted_blocks=1 00:04:55.064 00:04:55.064 ' 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.064 --rc genhtml_branch_coverage=1 00:04:55.064 --rc genhtml_function_coverage=1 00:04:55.064 --rc genhtml_legend=1 00:04:55.064 --rc geninfo_all_blocks=1 00:04:55.064 --rc geninfo_unexecuted_blocks=1 00:04:55.064 00:04:55.064 ' 00:04:55.064 19:34:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:55.064 OK 00:04:55.064 19:34:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.064 00:04:55.064 real 0m0.184s 00:04:55.064 user 0m0.111s 00:04:55.064 sys 0m0.081s 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.064 19:34:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.064 ************************************ 00:04:55.064 END TEST rpc_client 00:04:55.064 ************************************ 00:04:55.065 19:34:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:55.065 19:34:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.065 19:34:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.065 19:34:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.065 ************************************ 00:04:55.065 START TEST json_config 00:04:55.065 ************************************ 00:04:55.065 19:34:44 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.065 19:34:44 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.065 19:34:44 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.065 19:34:44 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.323 19:34:44 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.323 19:34:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.323 19:34:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.323 19:34:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.323 19:34:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.323 19:34:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.323 19:34:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.323 19:34:44 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.323 19:34:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.323 19:34:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.323 19:34:44 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.323 19:34:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.323 19:34:44 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.323 19:34:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.323 19:34:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.323 19:34:44 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.323 19:34:44 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.323 19:34:44 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.323 --rc genhtml_branch_coverage=1 00:04:55.323 --rc genhtml_function_coverage=1 00:04:55.323 --rc genhtml_legend=1 00:04:55.323 --rc geninfo_all_blocks=1 00:04:55.323 --rc geninfo_unexecuted_blocks=1 00:04:55.323 00:04:55.323 ' 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.324 --rc genhtml_branch_coverage=1 00:04:55.324 --rc genhtml_function_coverage=1 00:04:55.324 --rc genhtml_legend=1 00:04:55.324 --rc geninfo_all_blocks=1 00:04:55.324 --rc geninfo_unexecuted_blocks=1 00:04:55.324 00:04:55.324 ' 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.324 --rc genhtml_branch_coverage=1 00:04:55.324 --rc genhtml_function_coverage=1 00:04:55.324 --rc genhtml_legend=1 00:04:55.324 --rc geninfo_all_blocks=1 00:04:55.324 --rc geninfo_unexecuted_blocks=1 00:04:55.324 00:04:55.324 ' 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.324 --rc genhtml_branch_coverage=1 00:04:55.324 --rc genhtml_function_coverage=1 00:04:55.324 --rc genhtml_legend=1 00:04:55.324 --rc geninfo_all_blocks=1 00:04:55.324 --rc geninfo_unexecuted_blocks=1 00:04:55.324 00:04:55.324 ' 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:55.324 19:34:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.324 19:34:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.324 19:34:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.324 19:34:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.324 19:34:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.324 19:34:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.324 19:34:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.324 19:34:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.324 19:34:44 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.324 19:34:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@51 -- # : 0 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:55.324 19:34:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.324 19:34:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:55.324 INFO: JSON configuration test init 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.324 19:34:44 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.324 19:34:44 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:55.324 19:34:44 json_config -- json_config/common.sh@10 -- # shift 00:04:55.324 19:34:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.324 19:34:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.324 19:34:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.324 19:34:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.324 19:34:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.324 19:34:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2841514 00:04:55.324 19:34:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.324 19:34:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.324 Waiting for target to run... 00:04:55.324 19:34:44 json_config -- json_config/common.sh@25 -- # waitforlisten 2841514 /var/tmp/spdk_tgt.sock 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@831 -- # '[' -z 2841514 ']' 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.324 19:34:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.324 [2024-10-13 19:34:45.068423] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:55.324 [2024-10-13 19:34:45.068595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841514 ] 00:04:55.891 [2024-10-13 19:34:45.646958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.149 [2024-10-13 19:34:45.778582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.407 19:34:46 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.407 19:34:46 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:56.407 19:34:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.407 00:04:56.407 19:34:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:56.407 19:34:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:56.407 19:34:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.407 19:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.407 19:34:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:56.407 19:34:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:56.407 19:34:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.407 19:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.407 19:34:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:56.407 19:34:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:56.407 19:34:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:00.588 19:34:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.588 19:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:00.588 19:34:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:00.588 19:34:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:00.588 19:34:50 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@54 -- # sort 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:00.588 19:34:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.588 19:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:00.588 19:34:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:00.589 19:34:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:00.589 19:34:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.589 19:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.589 19:34:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:00.589 19:34:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:00.589 19:34:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:00.589 19:34:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.589 19:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.846 MallocForNvmf0 00:05:00.846 19:34:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.846 19:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.104 MallocForNvmf1 00:05:01.104 19:34:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.104 19:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.362 [2024-10-13 19:34:51.070176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.362 19:34:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.362 19:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.620 19:34:51 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.620 19:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.877 19:34:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.877 19:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.135 19:34:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.135 19:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.393 [2024-10-13 19:34:52.137923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.393 19:34:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:02.393 19:34:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.393 19:34:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.393 19:34:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:02.393 19:34:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.393 19:34:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.393 19:34:52 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:02.393 19:34:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.393 19:34:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.650 MallocBdevForConfigChangeCheck 00:05:02.908 19:34:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:02.908 19:34:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.908 19:34:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.908 19:34:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:02.908 19:34:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.166 19:34:52 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:03.166 INFO: shutting down applications... 
00:05:03.166 19:34:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:03.166 19:34:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:03.166 19:34:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:03.166 19:34:52 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:05.064 Calling clear_iscsi_subsystem 00:05:05.064 Calling clear_nvmf_subsystem 00:05:05.064 Calling clear_nbd_subsystem 00:05:05.064 Calling clear_ublk_subsystem 00:05:05.064 Calling clear_vhost_blk_subsystem 00:05:05.064 Calling clear_vhost_scsi_subsystem 00:05:05.064 Calling clear_bdev_subsystem 00:05:05.064 19:34:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:05.064 19:34:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:05.064 19:34:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:05.064 19:34:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.064 19:34:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:05.064 19:34:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:05.322 19:34:54 json_config -- json_config/json_config.sh@352 -- # break 00:05:05.322 19:34:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:05.322 19:34:54 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:05.322 19:34:54 json_config -- json_config/common.sh@31 -- # local app=target 00:05:05.322 19:34:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.322 19:34:54 json_config -- json_config/common.sh@35 -- # [[ -n 2841514 ]] 00:05:05.322 19:34:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2841514 00:05:05.322 19:34:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.322 19:34:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.322 19:34:54 json_config -- json_config/common.sh@41 -- # kill -0 2841514 00:05:05.322 19:34:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.887 19:34:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.887 19:34:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.887 19:34:55 json_config -- json_config/common.sh@41 -- # kill -0 2841514 00:05:05.887 19:34:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.147 19:34:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.147 19:34:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.147 19:34:55 json_config -- json_config/common.sh@41 -- # kill -0 2841514 00:05:06.448 19:34:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.706 19:34:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.706 19:34:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.706 19:34:56 json_config -- json_config/common.sh@41 -- # kill -0 2841514 00:05:06.706 19:34:56 json_config -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:05:06.706 19:34:56 json_config -- json_config/common.sh@43 -- # break 00:05:06.706 19:34:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.706 19:34:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.706 SPDK target shutdown done 00:05:06.706 19:34:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:06.706 INFO: relaunching applications... 00:05:06.706 19:34:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.706 19:34:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.706 19:34:56 json_config -- json_config/common.sh@10 -- # shift 00:05:06.706 19:34:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.706 19:34:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.706 19:34:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.706 19:34:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.706 19:34:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.706 19:34:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2843026 00:05:06.706 19:34:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.706 Waiting for target to run... 00:05:06.706 19:34:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.706 19:34:56 json_config -- json_config/common.sh@25 -- # waitforlisten 2843026 /var/tmp/spdk_tgt.sock 00:05:06.706 19:34:56 json_config -- common/autotest_common.sh@831 -- # '[' -z 2843026 ']' 00:05:06.706 19:34:56 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.706 19:34:56 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.706 19:34:56 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.706 19:34:56 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.706 19:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.974 [2024-10-13 19:34:56.568236] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:06.974 [2024-10-13 19:34:56.568412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843026 ] 00:05:07.633 [2024-10-13 19:34:57.187120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.633 [2024-10-13 19:34:57.318783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.816 [2024-10-13 19:35:01.112019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.816 [2024-10-13 19:35:01.144626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.816 19:35:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.816 19:35:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:11.816 19:35:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.816 00:05:11.816 19:35:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:11.816 19:35:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.816 INFO: Checking if target configuration is the same... 00:05:11.816 19:35:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.816 19:35:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:11.816 19:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.816 + '[' 2 -ne 2 ']' 00:05:11.816 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.816 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:11.816 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.816 +++ basename /dev/fd/62 00:05:11.816 ++ mktemp /tmp/62.XXX 00:05:11.816 + tmp_file_1=/tmp/62.YJn 00:05:11.816 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.816 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.816 + tmp_file_2=/tmp/spdk_tgt_config.json.8q5 00:05:11.816 + ret=0 00:05:11.816 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.816 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.073 + diff -u /tmp/62.YJn /tmp/spdk_tgt_config.json.8q5 00:05:12.073 + echo 'INFO: JSON config files are the same' 00:05:12.073 INFO: JSON config files are the same 00:05:12.073 + rm /tmp/62.YJn /tmp/spdk_tgt_config.json.8q5 00:05:12.073 + exit 0 00:05:12.073 19:35:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:12.073 19:35:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:12.074 INFO: changing configuration and checking if this can be detected... 
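The "same configuration" check above normalizes both sides with config_filter.py -method sort before diffing them. A condensed sketch of that flow, with temp-file handling simplified (the real json_diff.sh feeds one side through a file descriptor):

    live=$(mktemp)
    disk=$(mktemp)

    # live side: what the running target would save right now
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > "$live"
    # disk side: the JSON the target was started with
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$disk"

    if diff -u "$live" "$disk"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$disk"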
00:05:12.074 19:35:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.074 19:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.332 19:35:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.332 19:35:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:12.332 19:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.332 + '[' 2 -ne 2 ']' 00:05:12.332 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.332 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.332 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.332 +++ basename /dev/fd/62 00:05:12.332 ++ mktemp /tmp/62.XXX 00:05:12.332 + tmp_file_1=/tmp/62.y8f 00:05:12.332 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.332 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.332 + tmp_file_2=/tmp/spdk_tgt_config.json.JUf 00:05:12.332 + ret=0 00:05:12.332 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.590 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.590 + diff -u /tmp/62.y8f /tmp/spdk_tgt_config.json.JUf 00:05:12.590 + ret=1 00:05:12.590 + echo '=== Start of file: /tmp/62.y8f ===' 00:05:12.590 + cat /tmp/62.y8f 00:05:12.590 + echo '=== End of file: /tmp/62.y8f ===' 00:05:12.590 + echo '' 00:05:12.590 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JUf ===' 00:05:12.590 + cat /tmp/spdk_tgt_config.json.JUf 00:05:12.590 + echo '=== End of file: /tmp/spdk_tgt_config.json.JUf ===' 00:05:12.590 + echo '' 00:05:12.590 + rm /tmp/62.y8f /tmp/spdk_tgt_config.json.JUf 00:05:12.590 + exit 1 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:12.590 INFO: configuration change detected. 
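The change itself is provoked by removing a bdev that exists only in the saved JSON and re-running the same comparison; a short sketch (bdev name taken from the trace, socket path as above):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-running the sort-and-diff pair sketched earlier now exits non-zero (ret=1)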
00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:12.590 19:35:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.590 19:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 2843026 ]] 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.590 19:35:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.590 19:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:12.590 19:35:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:12.848 19:35:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:12.848 19:35:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.848 19:35:02 json_config -- json_config/json_config.sh@330 -- # killprocess 2843026 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@950 -- # '[' -z 2843026 ']' 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@954 -- # kill -0 2843026 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@955 -- # uname 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2843026 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2843026' 00:05:12.848 killing process with pid 2843026 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@969 -- # kill 2843026 00:05:12.848 19:35:02 json_config -- common/autotest_common.sh@974 -- # wait 2843026 00:05:15.377 19:35:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.377 19:35:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:15.377 19:35:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.377 19:35:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.377 19:35:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:15.377 19:35:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:15.377 INFO: Success 00:05:15.377 00:05:15.377 real 0m20.041s 
00:05:15.377 user 0m21.018s 00:05:15.377 sys 0m3.316s 00:05:15.377 19:35:04 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.377 19:35:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.377 ************************************ 00:05:15.377 END TEST json_config 00:05:15.377 ************************************ 00:05:15.377 19:35:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.377 19:35:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.377 19:35:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.377 19:35:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.377 ************************************ 00:05:15.377 START TEST json_config_extra_key 00:05:15.377 ************************************ 00:05:15.377 19:35:04 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.377 19:35:04 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.377 19:35:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.377 19:35:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.377 19:35:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.377 19:35:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:15.377 19:35:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.377 19:35:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.377 --rc genhtml_branch_coverage=1 00:05:15.377 --rc genhtml_function_coverage=1 00:05:15.377 --rc genhtml_legend=1 00:05:15.378 --rc geninfo_all_blocks=1 00:05:15.378 --rc geninfo_unexecuted_blocks=1 00:05:15.378 00:05:15.378 ' 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.378 --rc genhtml_branch_coverage=1 00:05:15.378 --rc genhtml_function_coverage=1 00:05:15.378 --rc genhtml_legend=1 00:05:15.378 --rc geninfo_all_blocks=1 00:05:15.378 --rc geninfo_unexecuted_blocks=1 00:05:15.378 00:05:15.378 ' 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.378 --rc genhtml_branch_coverage=1 00:05:15.378 --rc genhtml_function_coverage=1 00:05:15.378 --rc genhtml_legend=1 00:05:15.378 --rc geninfo_all_blocks=1 00:05:15.378 --rc geninfo_unexecuted_blocks=1 00:05:15.378 00:05:15.378 ' 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.378 --rc genhtml_branch_coverage=1 00:05:15.378 --rc genhtml_function_coverage=1 00:05:15.378 --rc genhtml_legend=1 00:05:15.378 --rc geninfo_all_blocks=1 00:05:15.378 --rc geninfo_unexecuted_blocks=1 00:05:15.378 00:05:15.378 ' 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.378 
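The lcov version probe traced above (cmp_versions 1.15 '<' 2) reduces to splitting both versions on '.', '-' and ':' and comparing component by component. A compact sketch of the same logic; the function name and exact return handling differ from scripts/common.sh:

    version_lt() {                       # "is $1 < $2 ?"
        local IFS=.-:
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc option names"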
19:35:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.378 19:35:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.378 19:35:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.378 19:35:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.378 19:35:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.378 19:35:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.378 19:35:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.378 19:35:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.378 19:35:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.378 19:35:05 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.378 19:35:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:15.378 INFO: launching applications... 
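json_config/common.sh keeps one associative array per app attribute, keyed by app name ('target' here; an 'initiator' key is used by other variants of this test, compare the spdk_initiator_config.json cleanup earlier). A sketch of that bookkeeping and how a launch line is assembled from it, with paths shortened for illustration:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='./test/json_config/extra_key.json')

    app=target
    # app_params is left unquoted on purpose so the flags split into words
    ./build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!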
00:05:15.378 19:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2844100 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.378 Waiting for target to run... 00:05:15.378 19:35:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2844100 /var/tmp/spdk_tgt.sock 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2844100 ']' 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.378 19:35:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.378 [2024-10-13 19:35:05.159024] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:15.378 [2024-10-13 19:35:05.159173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844100 ] 00:05:15.945 [2024-10-13 19:35:05.568683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.945 [2024-10-13 19:35:05.690675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.879 19:35:06 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.879 19:35:06 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:16.879 00:05:16.879 19:35:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:16.879 INFO: shutting down applications... 
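The "[: : integer expression expected" complaint from nvmf/common.sh line 33 a few entries back comes from handing an empty string to a numeric test. A generic guard for that pattern; the variable name here is a hypothetical stand-in, not the one common.sh actually tests:

    # Default empty or unset values to 0 before using -eq.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi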
00:05:16.879 19:35:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2844100 ]] 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2844100 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:16.879 19:35:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.137 19:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.137 19:35:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.137 19:35:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:17.137 19:35:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.703 19:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.703 19:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.703 19:35:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:17.703 19:35:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.268 19:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.268 19:35:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.268 19:35:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:18.268 19:35:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.834 19:35:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.834 19:35:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.834 19:35:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:18.834 19:35:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.400 19:35:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.400 19:35:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.400 19:35:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:19.400 19:35:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844100 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.658 19:35:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.658 SPDK target shutdown done 00:05:19.658 19:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:19.658 Success 00:05:19.658 00:05:19.658 real 0m4.510s 00:05:19.658 user 0m4.182s 00:05:19.658 sys 0m0.655s 
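The shutdown sequence just traced, SIGINT followed by up to 30 `kill -0` probes half a second apart, condenses to the following sketch; pid bookkeeping is simplified and the real helper also clears app_pid on success:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "app $pid did not exit in time" >&2
        return 1
    }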
00:05:19.658 19:35:09 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.658 19:35:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.658 ************************************ 00:05:19.658 END TEST json_config_extra_key 00:05:19.658 ************************************ 00:05:19.658 19:35:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.658 19:35:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.658 19:35:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.658 19:35:09 -- common/autotest_common.sh@10 -- # set +x 00:05:19.916 ************************************ 00:05:19.916 START TEST alias_rpc 00:05:19.916 ************************************ 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.916 * Looking for test storage... 00:05:19.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.916 19:35:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.916 --rc genhtml_branch_coverage=1 00:05:19.916 --rc genhtml_function_coverage=1 00:05:19.916 --rc genhtml_legend=1 00:05:19.916 --rc geninfo_all_blocks=1 00:05:19.916 --rc geninfo_unexecuted_blocks=1 00:05:19.916 00:05:19.916 ' 00:05:19.916 19:35:09 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.916 --rc genhtml_branch_coverage=1 00:05:19.917 --rc genhtml_function_coverage=1 00:05:19.917 --rc genhtml_legend=1 00:05:19.917 --rc geninfo_all_blocks=1 00:05:19.917 --rc geninfo_unexecuted_blocks=1 00:05:19.917 00:05:19.917 ' 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.917 --rc genhtml_branch_coverage=1 00:05:19.917 --rc genhtml_function_coverage=1 00:05:19.917 --rc genhtml_legend=1 00:05:19.917 --rc geninfo_all_blocks=1 00:05:19.917 --rc geninfo_unexecuted_blocks=1 00:05:19.917 00:05:19.917 ' 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.917 --rc genhtml_branch_coverage=1 00:05:19.917 --rc genhtml_function_coverage=1 00:05:19.917 --rc genhtml_legend=1 00:05:19.917 --rc geninfo_all_blocks=1 00:05:19.917 --rc geninfo_unexecuted_blocks=1 00:05:19.917 00:05:19.917 ' 00:05:19.917 19:35:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.917 19:35:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2844785 00:05:19.917 19:35:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.917 19:35:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2844785 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2844785 ']' 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:19.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.917 19:35:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.917 [2024-10-13 19:35:09.718577] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:19.917 [2024-10-13 19:35:09.718739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844785 ] 00:05:20.175 [2024-10-13 19:35:09.843765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.175 [2024-10-13 19:35:09.974305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.109 19:35:10 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.109 19:35:10 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:21.109 19:35:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:21.674 19:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2844785 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2844785 ']' 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2844785 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2844785 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2844785' 00:05:21.674 killing process with pid 2844785 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@969 -- # kill 2844785 00:05:21.674 19:35:11 alias_rpc -- common/autotest_common.sh@974 -- # wait 2844785 00:05:24.203 00:05:24.203 real 0m4.187s 00:05:24.203 user 0m4.384s 00:05:24.203 sys 0m0.639s 00:05:24.203 19:35:13 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.203 19:35:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.203 ************************************ 00:05:24.203 END TEST alias_rpc 00:05:24.203 ************************************ 00:05:24.203 19:35:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:24.203 19:35:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.203 19:35:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.203 19:35:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.203 19:35:13 -- common/autotest_common.sh@10 -- # set +x 00:05:24.203 ************************************ 00:05:24.203 START TEST spdkcli_tcp 00:05:24.203 ************************************ 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.203 * Looking for test storage... 
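killprocess, traced here for pid 2844785 and earlier for the json_config target, boils down to: confirm the pid is alive, look up its process name with ps, then kill and reap it. A trimmed sketch; the sudo special case and the exact messages of the real helper are omitted:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                      # not running
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                         # reaps only if it is our child
    }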
00:05:24.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.203 19:35:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.203 --rc genhtml_branch_coverage=1 00:05:24.203 --rc genhtml_function_coverage=1 00:05:24.203 --rc genhtml_legend=1 00:05:24.203 --rc geninfo_all_blocks=1 00:05:24.203 --rc geninfo_unexecuted_blocks=1 00:05:24.203 00:05:24.203 ' 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.203 --rc genhtml_branch_coverage=1 00:05:24.203 --rc genhtml_function_coverage=1 00:05:24.203 --rc genhtml_legend=1 00:05:24.203 --rc geninfo_all_blocks=1 00:05:24.203 --rc 
geninfo_unexecuted_blocks=1 00:05:24.203 00:05:24.203 ' 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.203 --rc genhtml_branch_coverage=1 00:05:24.203 --rc genhtml_function_coverage=1 00:05:24.203 --rc genhtml_legend=1 00:05:24.203 --rc geninfo_all_blocks=1 00:05:24.203 --rc geninfo_unexecuted_blocks=1 00:05:24.203 00:05:24.203 ' 00:05:24.203 19:35:13 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.203 --rc genhtml_branch_coverage=1 00:05:24.203 --rc genhtml_function_coverage=1 00:05:24.203 --rc genhtml_legend=1 00:05:24.203 --rc geninfo_all_blocks=1 00:05:24.203 --rc geninfo_unexecuted_blocks=1 00:05:24.203 00:05:24.203 ' 00:05:24.203 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:24.203 19:35:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:24.203 19:35:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2845281 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.204 19:35:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2845281 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2845281 ']' 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.204 19:35:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.204 [2024-10-13 19:35:13.952788] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
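The test then bridges the target's UNIX-domain RPC socket to TCP port 9998 with socat and drives rpc.py over TCP exactly as a remote client would, as the trace below shows. The same steps in isolation, with the flags copied from that trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"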
00:05:24.204 [2024-10-13 19:35:13.952945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845281 ] 00:05:24.462 [2024-10-13 19:35:14.095008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.462 [2024-10-13 19:35:14.235198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.462 [2024-10-13 19:35:14.235199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.395 19:35:15 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.395 19:35:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:25.395 19:35:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2845497 00:05:25.395 19:35:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.395 19:35:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.653 [ 00:05:25.653 "bdev_malloc_delete", 00:05:25.653 "bdev_malloc_create", 00:05:25.653 "bdev_null_resize", 00:05:25.653 "bdev_null_delete", 00:05:25.653 "bdev_null_create", 00:05:25.653 "bdev_nvme_cuse_unregister", 00:05:25.653 "bdev_nvme_cuse_register", 00:05:25.653 "bdev_opal_new_user", 00:05:25.653 "bdev_opal_set_lock_state", 00:05:25.653 "bdev_opal_delete", 00:05:25.653 "bdev_opal_get_info", 00:05:25.653 "bdev_opal_create", 00:05:25.653 "bdev_nvme_opal_revert", 00:05:25.653 "bdev_nvme_opal_init", 00:05:25.653 "bdev_nvme_send_cmd", 00:05:25.653 "bdev_nvme_set_keys", 00:05:25.653 "bdev_nvme_get_path_iostat", 00:05:25.653 "bdev_nvme_get_mdns_discovery_info", 00:05:25.653 "bdev_nvme_stop_mdns_discovery", 00:05:25.653 "bdev_nvme_start_mdns_discovery", 00:05:25.653 "bdev_nvme_set_multipath_policy", 00:05:25.653 "bdev_nvme_set_preferred_path", 00:05:25.653 "bdev_nvme_get_io_paths", 00:05:25.653 "bdev_nvme_remove_error_injection", 00:05:25.653 "bdev_nvme_add_error_injection", 00:05:25.653 "bdev_nvme_get_discovery_info", 00:05:25.653 "bdev_nvme_stop_discovery", 00:05:25.653 "bdev_nvme_start_discovery", 00:05:25.653 "bdev_nvme_get_controller_health_info", 00:05:25.653 "bdev_nvme_disable_controller", 00:05:25.653 "bdev_nvme_enable_controller", 00:05:25.653 "bdev_nvme_reset_controller", 00:05:25.653 "bdev_nvme_get_transport_statistics", 00:05:25.653 "bdev_nvme_apply_firmware", 00:05:25.653 "bdev_nvme_detach_controller", 00:05:25.653 "bdev_nvme_get_controllers", 00:05:25.653 "bdev_nvme_attach_controller", 00:05:25.653 "bdev_nvme_set_hotplug", 00:05:25.653 "bdev_nvme_set_options", 00:05:25.653 "bdev_passthru_delete", 00:05:25.653 "bdev_passthru_create", 00:05:25.653 "bdev_lvol_set_parent_bdev", 00:05:25.653 "bdev_lvol_set_parent", 00:05:25.653 "bdev_lvol_check_shallow_copy", 00:05:25.653 "bdev_lvol_start_shallow_copy", 00:05:25.653 "bdev_lvol_grow_lvstore", 00:05:25.653 "bdev_lvol_get_lvols", 00:05:25.653 "bdev_lvol_get_lvstores", 00:05:25.653 "bdev_lvol_delete", 00:05:25.653 "bdev_lvol_set_read_only", 00:05:25.653 "bdev_lvol_resize", 00:05:25.653 "bdev_lvol_decouple_parent", 00:05:25.653 "bdev_lvol_inflate", 00:05:25.653 "bdev_lvol_rename", 00:05:25.653 "bdev_lvol_clone_bdev", 00:05:25.653 "bdev_lvol_clone", 00:05:25.653 "bdev_lvol_snapshot", 00:05:25.653 "bdev_lvol_create", 00:05:25.653 "bdev_lvol_delete_lvstore", 00:05:25.653 "bdev_lvol_rename_lvstore", 
00:05:25.653 "bdev_lvol_create_lvstore", 00:05:25.653 "bdev_raid_set_options", 00:05:25.653 "bdev_raid_remove_base_bdev", 00:05:25.653 "bdev_raid_add_base_bdev", 00:05:25.653 "bdev_raid_delete", 00:05:25.653 "bdev_raid_create", 00:05:25.653 "bdev_raid_get_bdevs", 00:05:25.653 "bdev_error_inject_error", 00:05:25.653 "bdev_error_delete", 00:05:25.653 "bdev_error_create", 00:05:25.653 "bdev_split_delete", 00:05:25.653 "bdev_split_create", 00:05:25.653 "bdev_delay_delete", 00:05:25.653 "bdev_delay_create", 00:05:25.653 "bdev_delay_update_latency", 00:05:25.653 "bdev_zone_block_delete", 00:05:25.653 "bdev_zone_block_create", 00:05:25.653 "blobfs_create", 00:05:25.653 "blobfs_detect", 00:05:25.653 "blobfs_set_cache_size", 00:05:25.653 "bdev_aio_delete", 00:05:25.653 "bdev_aio_rescan", 00:05:25.653 "bdev_aio_create", 00:05:25.653 "bdev_ftl_set_property", 00:05:25.653 "bdev_ftl_get_properties", 00:05:25.653 "bdev_ftl_get_stats", 00:05:25.653 "bdev_ftl_unmap", 00:05:25.653 "bdev_ftl_unload", 00:05:25.653 "bdev_ftl_delete", 00:05:25.653 "bdev_ftl_load", 00:05:25.653 "bdev_ftl_create", 00:05:25.653 "bdev_virtio_attach_controller", 00:05:25.653 "bdev_virtio_scsi_get_devices", 00:05:25.653 "bdev_virtio_detach_controller", 00:05:25.653 "bdev_virtio_blk_set_hotplug", 00:05:25.653 "bdev_iscsi_delete", 00:05:25.653 "bdev_iscsi_create", 00:05:25.653 "bdev_iscsi_set_options", 00:05:25.653 "accel_error_inject_error", 00:05:25.653 "ioat_scan_accel_module", 00:05:25.653 "dsa_scan_accel_module", 00:05:25.653 "iaa_scan_accel_module", 00:05:25.653 "keyring_file_remove_key", 00:05:25.653 "keyring_file_add_key", 00:05:25.653 "keyring_linux_set_options", 00:05:25.653 "fsdev_aio_delete", 00:05:25.653 "fsdev_aio_create", 00:05:25.653 "iscsi_get_histogram", 00:05:25.654 "iscsi_enable_histogram", 00:05:25.654 "iscsi_set_options", 00:05:25.654 "iscsi_get_auth_groups", 00:05:25.654 "iscsi_auth_group_remove_secret", 00:05:25.654 "iscsi_auth_group_add_secret", 00:05:25.654 "iscsi_delete_auth_group", 00:05:25.654 "iscsi_create_auth_group", 00:05:25.654 "iscsi_set_discovery_auth", 00:05:25.654 "iscsi_get_options", 00:05:25.654 "iscsi_target_node_request_logout", 00:05:25.654 "iscsi_target_node_set_redirect", 00:05:25.654 "iscsi_target_node_set_auth", 00:05:25.654 "iscsi_target_node_add_lun", 00:05:25.654 "iscsi_get_stats", 00:05:25.654 "iscsi_get_connections", 00:05:25.654 "iscsi_portal_group_set_auth", 00:05:25.654 "iscsi_start_portal_group", 00:05:25.654 "iscsi_delete_portal_group", 00:05:25.654 "iscsi_create_portal_group", 00:05:25.654 "iscsi_get_portal_groups", 00:05:25.654 "iscsi_delete_target_node", 00:05:25.654 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.654 "iscsi_target_node_add_pg_ig_maps", 00:05:25.654 "iscsi_create_target_node", 00:05:25.654 "iscsi_get_target_nodes", 00:05:25.654 "iscsi_delete_initiator_group", 00:05:25.654 "iscsi_initiator_group_remove_initiators", 00:05:25.654 "iscsi_initiator_group_add_initiators", 00:05:25.654 "iscsi_create_initiator_group", 00:05:25.654 "iscsi_get_initiator_groups", 00:05:25.654 "nvmf_set_crdt", 00:05:25.654 "nvmf_set_config", 00:05:25.654 "nvmf_set_max_subsystems", 00:05:25.654 "nvmf_stop_mdns_prr", 00:05:25.654 "nvmf_publish_mdns_prr", 00:05:25.654 "nvmf_subsystem_get_listeners", 00:05:25.654 "nvmf_subsystem_get_qpairs", 00:05:25.654 "nvmf_subsystem_get_controllers", 00:05:25.654 "nvmf_get_stats", 00:05:25.654 "nvmf_get_transports", 00:05:25.654 "nvmf_create_transport", 00:05:25.654 "nvmf_get_targets", 00:05:25.654 "nvmf_delete_target", 00:05:25.654 "nvmf_create_target", 
00:05:25.654 "nvmf_subsystem_allow_any_host", 00:05:25.654 "nvmf_subsystem_set_keys", 00:05:25.654 "nvmf_subsystem_remove_host", 00:05:25.654 "nvmf_subsystem_add_host", 00:05:25.654 "nvmf_ns_remove_host", 00:05:25.654 "nvmf_ns_add_host", 00:05:25.654 "nvmf_subsystem_remove_ns", 00:05:25.654 "nvmf_subsystem_set_ns_ana_group", 00:05:25.654 "nvmf_subsystem_add_ns", 00:05:25.654 "nvmf_subsystem_listener_set_ana_state", 00:05:25.654 "nvmf_discovery_get_referrals", 00:05:25.654 "nvmf_discovery_remove_referral", 00:05:25.654 "nvmf_discovery_add_referral", 00:05:25.654 "nvmf_subsystem_remove_listener", 00:05:25.654 "nvmf_subsystem_add_listener", 00:05:25.654 "nvmf_delete_subsystem", 00:05:25.654 "nvmf_create_subsystem", 00:05:25.654 "nvmf_get_subsystems", 00:05:25.654 "env_dpdk_get_mem_stats", 00:05:25.654 "nbd_get_disks", 00:05:25.654 "nbd_stop_disk", 00:05:25.654 "nbd_start_disk", 00:05:25.654 "ublk_recover_disk", 00:05:25.654 "ublk_get_disks", 00:05:25.654 "ublk_stop_disk", 00:05:25.654 "ublk_start_disk", 00:05:25.654 "ublk_destroy_target", 00:05:25.654 "ublk_create_target", 00:05:25.654 "virtio_blk_create_transport", 00:05:25.654 "virtio_blk_get_transports", 00:05:25.654 "vhost_controller_set_coalescing", 00:05:25.654 "vhost_get_controllers", 00:05:25.654 "vhost_delete_controller", 00:05:25.654 "vhost_create_blk_controller", 00:05:25.654 "vhost_scsi_controller_remove_target", 00:05:25.654 "vhost_scsi_controller_add_target", 00:05:25.654 "vhost_start_scsi_controller", 00:05:25.654 "vhost_create_scsi_controller", 00:05:25.654 "thread_set_cpumask", 00:05:25.654 "scheduler_set_options", 00:05:25.654 "framework_get_governor", 00:05:25.654 "framework_get_scheduler", 00:05:25.654 "framework_set_scheduler", 00:05:25.654 "framework_get_reactors", 00:05:25.654 "thread_get_io_channels", 00:05:25.654 "thread_get_pollers", 00:05:25.654 "thread_get_stats", 00:05:25.654 "framework_monitor_context_switch", 00:05:25.654 "spdk_kill_instance", 00:05:25.654 "log_enable_timestamps", 00:05:25.654 "log_get_flags", 00:05:25.654 "log_clear_flag", 00:05:25.654 "log_set_flag", 00:05:25.654 "log_get_level", 00:05:25.654 "log_set_level", 00:05:25.654 "log_get_print_level", 00:05:25.654 "log_set_print_level", 00:05:25.654 "framework_enable_cpumask_locks", 00:05:25.654 "framework_disable_cpumask_locks", 00:05:25.654 "framework_wait_init", 00:05:25.654 "framework_start_init", 00:05:25.654 "scsi_get_devices", 00:05:25.654 "bdev_get_histogram", 00:05:25.654 "bdev_enable_histogram", 00:05:25.654 "bdev_set_qos_limit", 00:05:25.654 "bdev_set_qd_sampling_period", 00:05:25.654 "bdev_get_bdevs", 00:05:25.654 "bdev_reset_iostat", 00:05:25.654 "bdev_get_iostat", 00:05:25.654 "bdev_examine", 00:05:25.654 "bdev_wait_for_examine", 00:05:25.654 "bdev_set_options", 00:05:25.654 "accel_get_stats", 00:05:25.654 "accel_set_options", 00:05:25.654 "accel_set_driver", 00:05:25.654 "accel_crypto_key_destroy", 00:05:25.654 "accel_crypto_keys_get", 00:05:25.654 "accel_crypto_key_create", 00:05:25.654 "accel_assign_opc", 00:05:25.654 "accel_get_module_info", 00:05:25.654 "accel_get_opc_assignments", 00:05:25.654 "vmd_rescan", 00:05:25.654 "vmd_remove_device", 00:05:25.654 "vmd_enable", 00:05:25.654 "sock_get_default_impl", 00:05:25.654 "sock_set_default_impl", 00:05:25.654 "sock_impl_set_options", 00:05:25.654 "sock_impl_get_options", 00:05:25.654 "iobuf_get_stats", 00:05:25.654 "iobuf_set_options", 00:05:25.654 "keyring_get_keys", 00:05:25.654 "framework_get_pci_devices", 00:05:25.654 "framework_get_config", 00:05:25.654 "framework_get_subsystems", 
00:05:25.654 "fsdev_set_opts", 00:05:25.654 "fsdev_get_opts", 00:05:25.654 "trace_get_info", 00:05:25.654 "trace_get_tpoint_group_mask", 00:05:25.654 "trace_disable_tpoint_group", 00:05:25.654 "trace_enable_tpoint_group", 00:05:25.654 "trace_clear_tpoint_mask", 00:05:25.654 "trace_set_tpoint_mask", 00:05:25.654 "notify_get_notifications", 00:05:25.654 "notify_get_types", 00:05:25.654 "spdk_get_version", 00:05:25.654 "rpc_get_methods" 00:05:25.654 ] 00:05:25.654 19:35:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.654 19:35:15 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.654 19:35:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.912 19:35:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.912 19:35:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2845281 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2845281 ']' 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2845281 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2845281 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2845281' 00:05:25.912 killing process with pid 2845281 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2845281 00:05:25.912 19:35:15 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2845281 00:05:28.442 00:05:28.442 real 0m4.181s 00:05:28.442 user 0m7.661s 00:05:28.442 sys 0m0.695s 00:05:28.442 19:35:17 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.442 19:35:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 ************************************ 00:05:28.442 END TEST spdkcli_tcp 00:05:28.442 ************************************ 00:05:28.442 19:35:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.442 19:35:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.442 19:35:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.442 19:35:17 -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 ************************************ 00:05:28.442 START TEST dpdk_mem_utility 00:05:28.442 ************************************ 00:05:28.442 19:35:17 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.442 * Looking for test storage... 
00:05:28.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:28.442 19:35:17 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.442 19:35:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.442 19:35:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.442 19:35:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.442 --rc genhtml_branch_coverage=1 00:05:28.442 --rc genhtml_function_coverage=1 00:05:28.442 --rc genhtml_legend=1 00:05:28.442 --rc geninfo_all_blocks=1 00:05:28.442 --rc geninfo_unexecuted_blocks=1 00:05:28.442 00:05:28.442 ' 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.442 --rc 
genhtml_branch_coverage=1 00:05:28.442 --rc genhtml_function_coverage=1 00:05:28.442 --rc genhtml_legend=1 00:05:28.442 --rc geninfo_all_blocks=1 00:05:28.442 --rc geninfo_unexecuted_blocks=1 00:05:28.442 00:05:28.442 ' 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.442 --rc genhtml_branch_coverage=1 00:05:28.442 --rc genhtml_function_coverage=1 00:05:28.442 --rc genhtml_legend=1 00:05:28.442 --rc geninfo_all_blocks=1 00:05:28.442 --rc geninfo_unexecuted_blocks=1 00:05:28.442 00:05:28.442 ' 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.442 --rc genhtml_branch_coverage=1 00:05:28.442 --rc genhtml_function_coverage=1 00:05:28.442 --rc genhtml_legend=1 00:05:28.442 --rc geninfo_all_blocks=1 00:05:28.442 --rc geninfo_unexecuted_blocks=1 00:05:28.442 00:05:28.442 ' 00:05:28.442 19:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.442 19:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2845885 00:05:28.442 19:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.442 19:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2845885 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2845885 ']' 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.442 19:35:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 [2024-10-13 19:35:18.181740] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:28.442 [2024-10-13 19:35:18.181890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845885 ] 00:05:28.700 [2024-10-13 19:35:18.309111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.700 [2024-10-13 19:35:18.439906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.662 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.662 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:29.662 19:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.662 19:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.662 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.662 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.662 { 00:05:29.662 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.662 } 00:05:29.662 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.662 19:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:29.662 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:29.662 1 heaps totaling size 816.000000 MiB 00:05:29.662 size: 816.000000 MiB heap id: 0 00:05:29.662 end heaps---------- 00:05:29.662 9 mempools totaling size 595.772034 MiB 00:05:29.662 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.662 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.662 size: 92.545471 MiB name: bdev_io_2845885 00:05:29.662 size: 50.003479 MiB name: msgpool_2845885 00:05:29.662 size: 36.509338 MiB name: fsdev_io_2845885 00:05:29.662 size: 21.763794 MiB name: PDU_Pool 00:05:29.662 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.662 size: 4.133484 MiB name: evtpool_2845885 00:05:29.662 size: 0.026123 MiB name: Session_Pool 00:05:29.662 end mempools------- 00:05:29.662 6 memzones totaling size 4.142822 MiB 00:05:29.662 size: 1.000366 MiB name: RG_ring_0_2845885 00:05:29.662 size: 1.000366 MiB name: RG_ring_1_2845885 00:05:29.662 size: 1.000366 MiB name: RG_ring_4_2845885 00:05:29.662 size: 1.000366 MiB name: RG_ring_5_2845885 00:05:29.662 size: 0.125366 MiB name: RG_ring_2_2845885 00:05:29.662 size: 0.015991 MiB name: RG_ring_3_2845885 00:05:29.662 end memzones------- 00:05:29.662 19:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.921 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:29.921 list of free elements. 
size: 16.857605 MiB 00:05:29.921 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:29.921 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:29.921 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:29.921 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:29.921 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:29.921 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:29.921 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:29.921 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:29.921 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:29.921 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:29.921 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:29.921 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:29.921 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:29.921 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:29.921 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:29.921 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:29.921 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:29.921 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:29.921 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:29.921 list of standard malloc elements. size: 199.221497 MiB 00:05:29.921 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:29.921 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:29.921 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:29.921 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:29.921 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:29.921 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:29.921 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:29.921 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:29.921 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:29.921 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:29.921 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:29.921 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:29.921 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:29.921 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:29.921 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:29.921 list of memzone associated elements. size: 599.920898 MiB 00:05:29.921 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:29.921 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.921 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:29.921 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.921 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:29.921 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2845885_0 00:05:29.921 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:29.921 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2845885_0 00:05:29.921 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:29.921 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2845885_0 00:05:29.921 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:29.921 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.921 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:29.921 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.921 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:29.921 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2845885_0 00:05:29.921 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:29.921 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2845885 00:05:29.921 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:29.921 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2845885 00:05:29.921 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:29.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.921 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:29.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.921 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:29.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.921 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:29.921 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.921 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:29.921 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2845885 00:05:29.921 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:05:29.921 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2845885 00:05:29.921 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:29.921 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2845885 00:05:29.921 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:29.921 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2845885 00:05:29.921 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:29.921 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2845885 00:05:29.921 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:29.921 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2845885 00:05:29.921 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:29.921 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.921 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:29.921 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.921 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:29.921 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.921 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:29.921 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2845885 00:05:29.921 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:29.921 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2845885 00:05:29.921 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:05:29.921 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.921 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:29.921 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.922 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:29.922 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2845885 00:05:29.922 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:29.922 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.922 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:29.922 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2845885 00:05:29.922 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:29.922 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2845885 00:05:29.922 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:29.922 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2845885 00:05:29.922 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:29.922 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.922 19:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.922 19:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2845885 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2845885 ']' 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2845885 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2845885 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 
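The memory report above is produced by two helpers the test chains together: the env_dpdk_get_mem_stats RPC, which writes the raw dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which formats it into the heap/mempool/memzone summary and, with -m 0, the per-element listing for heap 0. A minimal sketch of reproducing the same dump by hand against a running spdk_tgt (paths match this run's workspace; the sleep is a crude stand-in for the test's waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt &                      # target that owns the DPDK heap
  sleep 2                                         # stand-in for waitforlisten
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # dump written to /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                  # heap / mempool / memzone totals
  $SPDK/scripts/dpdk_mem_info.py -m 0             # per-element detail as shown above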
00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2845885' 00:05:29.922 killing process with pid 2845885 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2845885 00:05:29.922 19:35:19 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2845885 00:05:32.451 00:05:32.451 real 0m4.007s 00:05:32.451 user 0m4.077s 00:05:32.451 sys 0m0.610s 00:05:32.451 19:35:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.451 19:35:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.451 ************************************ 00:05:32.451 END TEST dpdk_mem_utility 00:05:32.451 ************************************ 00:05:32.451 19:35:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.451 19:35:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.451 19:35:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.451 19:35:21 -- common/autotest_common.sh@10 -- # set +x 00:05:32.451 ************************************ 00:05:32.451 START TEST event 00:05:32.451 ************************************ 00:05:32.451 19:35:21 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.451 * Looking for test storage... 00:05:32.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:32.451 19:35:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.451 19:35:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.451 19:35:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.451 19:35:22 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.451 19:35:22 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.451 19:35:22 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.451 19:35:22 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.451 19:35:22 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.451 19:35:22 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.451 19:35:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.451 19:35:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.451 19:35:22 event -- scripts/common.sh@344 -- # case "$op" in 00:05:32.451 19:35:22 event -- scripts/common.sh@345 -- # : 1 00:05:32.451 19:35:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.451 19:35:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.451 19:35:22 event -- scripts/common.sh@365 -- # decimal 1 00:05:32.451 19:35:22 event -- scripts/common.sh@353 -- # local d=1 00:05:32.451 19:35:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.451 19:35:22 event -- scripts/common.sh@355 -- # echo 1 00:05:32.451 19:35:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.451 19:35:22 event -- scripts/common.sh@366 -- # decimal 2 00:05:32.451 19:35:22 event -- scripts/common.sh@353 -- # local d=2 00:05:32.451 19:35:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.451 19:35:22 event -- scripts/common.sh@355 -- # echo 2 00:05:32.451 19:35:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.451 19:35:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.451 19:35:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.451 19:35:22 event -- scripts/common.sh@368 -- # return 0 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:32.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.451 --rc genhtml_branch_coverage=1 00:05:32.451 --rc genhtml_function_coverage=1 00:05:32.451 --rc genhtml_legend=1 00:05:32.451 --rc geninfo_all_blocks=1 00:05:32.451 --rc geninfo_unexecuted_blocks=1 00:05:32.451 00:05:32.451 ' 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:32.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.451 --rc genhtml_branch_coverage=1 00:05:32.451 --rc genhtml_function_coverage=1 00:05:32.451 --rc genhtml_legend=1 00:05:32.451 --rc geninfo_all_blocks=1 00:05:32.451 --rc geninfo_unexecuted_blocks=1 00:05:32.451 00:05:32.451 ' 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:32.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.451 --rc genhtml_branch_coverage=1 00:05:32.451 --rc genhtml_function_coverage=1 00:05:32.451 --rc genhtml_legend=1 00:05:32.451 --rc geninfo_all_blocks=1 00:05:32.451 --rc geninfo_unexecuted_blocks=1 00:05:32.451 00:05:32.451 ' 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:32.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.451 --rc genhtml_branch_coverage=1 00:05:32.451 --rc genhtml_function_coverage=1 00:05:32.451 --rc genhtml_legend=1 00:05:32.451 --rc geninfo_all_blocks=1 00:05:32.451 --rc geninfo_unexecuted_blocks=1 00:05:32.451 00:05:32.451 ' 00:05:32.451 19:35:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:32.451 19:35:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:32.451 19:35:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:32.451 19:35:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.451 19:35:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.451 ************************************ 00:05:32.451 START TEST event_perf 00:05:32.451 ************************************ 00:05:32.451 19:35:22 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:32.451 Running I/O for 1 seconds...[2024-10-13 19:35:22.201444] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:32.451 [2024-10-13 19:35:22.201552] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846487 ] 00:05:32.709 [2024-10-13 19:35:22.331246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.709 [2024-10-13 19:35:22.477469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.709 [2024-10-13 19:35:22.477497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.709 [2024-10-13 19:35:22.477570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.709 [2024-10-13 19:35:22.477580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.082 Running I/O for 1 seconds... 00:05:34.082 lcore 0: 212361 00:05:34.082 lcore 1: 212360 00:05:34.082 lcore 2: 212359 00:05:34.082 lcore 3: 212359 00:05:34.082 done. 00:05:34.082 00:05:34.082 real 0m1.583s 00:05:34.082 user 0m4.421s 00:05:34.082 sys 0m0.148s 00:05:34.082 19:35:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.082 19:35:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.082 ************************************ 00:05:34.082 END TEST event_perf 00:05:34.082 ************************************ 00:05:34.082 19:35:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.082 19:35:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:34.082 19:35:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.082 19:35:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.082 ************************************ 00:05:34.082 START TEST event_reactor 00:05:34.082 ************************************ 00:05:34.082 19:35:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.082 [2024-10-13 19:35:23.834540] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:34.082 [2024-10-13 19:35:23.834645] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846643 ] 00:05:34.341 [2024-10-13 19:35:23.964953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.341 [2024-10-13 19:35:24.101736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.714 test_start 00:05:35.714 oneshot 00:05:35.714 tick 100 00:05:35.714 tick 100 00:05:35.714 tick 250 00:05:35.714 tick 100 00:05:35.714 tick 100 00:05:35.714 tick 100 00:05:35.714 tick 250 00:05:35.714 tick 500 00:05:35.714 tick 100 00:05:35.714 tick 100 00:05:35.714 tick 250 00:05:35.714 tick 100 00:05:35.714 tick 100 00:05:35.714 test_end 00:05:35.714 00:05:35.714 real 0m1.557s 00:05:35.714 user 0m1.407s 00:05:35.714 sys 0m0.143s 00:05:35.714 19:35:25 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.714 19:35:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:35.714 ************************************ 00:05:35.714 END TEST event_reactor 00:05:35.714 ************************************ 00:05:35.714 19:35:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.714 19:35:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:35.714 19:35:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.714 19:35:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.714 ************************************ 00:05:35.714 START TEST event_reactor_perf 00:05:35.714 ************************************ 00:05:35.714 19:35:25 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.714 [2024-10-13 19:35:25.438189] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:35.714 [2024-10-13 19:35:25.438306] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846926 ] 00:05:35.972 [2024-10-13 19:35:25.570011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.972 [2024-10-13 19:35:25.708474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.346 test_start 00:05:37.346 test_end 00:05:37.346 Performance: 268520 events per second 00:05:37.346 00:05:37.346 real 0m1.561s 00:05:37.346 user 0m1.405s 00:05:37.346 sys 0m0.146s 00:05:37.346 19:35:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.346 19:35:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.346 ************************************ 00:05:37.346 END TEST event_reactor_perf 00:05:37.346 ************************************ 00:05:37.346 19:35:26 event -- event/event.sh@49 -- # uname -s 00:05:37.346 19:35:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:37.346 19:35:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.346 19:35:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.346 19:35:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.346 19:35:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.346 ************************************ 00:05:37.346 START TEST event_scheduler 00:05:37.346 ************************************ 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.346 * Looking for test storage... 
00:05:37.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.346 19:35:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 19:35:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:37.346 19:35:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2847116 00:05:37.346 19:35:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:37.346 19:35:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.346 19:35:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2847116 00:05:37.346 19:35:27 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2847116 ']' 00:05:37.347 19:35:27 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.347 19:35:27 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.347 19:35:27 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.347 19:35:27 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.347 19:35:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.605 [2024-10-13 19:35:27.228840] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:37.605 [2024-10-13 19:35:27.228983] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847116 ] 00:05:37.605 [2024-10-13 19:35:27.360045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.906 [2024-10-13 19:35:27.482392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.906 [2024-10-13 19:35:27.482462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.906 [2024-10-13 19:35:27.482493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.906 [2024-10-13 19:35:27.482503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:38.500 19:35:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.500 [2024-10-13 19:35:28.185603] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:38.500 [2024-10-13 19:35:28.185689] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:38.500 [2024-10-13 19:35:28.185724] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:38.500 [2024-10-13 19:35:28.185760] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:38.500 [2024-10-13 19:35:28.185780] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.500 19:35:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.500 19:35:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.758 [2024-10-13 19:35:28.495322] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
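The event_scheduler suite drives scheduler selection entirely over RPC: the app is started with --wait-for-rpc, the dynamic scheduler is chosen (the NOTICE lines above report its defaults of load limit 20, core limit 80 and core busy 95, with the dpdk governor init failing because the core mask covers only part of an SMT sibling set), and only then is framework_start_init issued. A rough by-hand equivalent, using the same binary and options as this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler; governor init may fail harmlessly (see NOTICE above)
  $SPDK/scripts/rpc.py framework_start_init              # finish subsystem init under the new scheduler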
00:05:38.758 19:35:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:38.759 19:35:28 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.759 19:35:28 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 ************************************ 00:05:38.759 START TEST scheduler_create_thread 00:05:38.759 ************************************ 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 2 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 3 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 4 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 5 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 6 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 7 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.759 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.017 8 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.017 9 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.017 10 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.017 19:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 19:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.582 00:05:39.582 real 0m0.592s 00:05:39.582 user 0m0.009s 00:05:39.582 sys 0m0.004s 00:05:39.582 19:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.582 19:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.582 ************************************ 00:05:39.582 END TEST scheduler_create_thread 00:05:39.582 ************************************ 00:05:39.582 19:35:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.582 19:35:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2847116 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2847116 ']' 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2847116 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2847116 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2847116' 00:05:39.582 killing process with pid 2847116 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2847116 00:05:39.582 19:35:29 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2847116 00:05:39.840 [2024-10-13 19:35:29.591912] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
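scheduler_create_thread exercises the thread bookkeeping through RPCs registered by the test's own scheduler_plugin (they are not part of the core RPC set listed earlier): it creates busy and idle threads pinned to each core, an unpinned half-active thread whose activity is then changed, and a throwaway thread that is deleted again. A condensed sketch of those calls, assuming rpc.py can import the plugin the same way the test's rpc_cmd wrapper does:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # fully busy, pinned to core 0
  $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle, pinned to core 0
  tid=$($RPC scheduler_thread_create -n half_active -a 0)       # returns the new thread id
  $RPC scheduler_thread_set_active "$tid" 50                    # raise it to ~50% activity
  doomed=$($RPC scheduler_thread_create -n deleted -a 100)
  $RPC scheduler_thread_delete "$doomed"                        # and remove it again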
00:05:40.825 00:05:40.825 real 0m3.625s 00:05:40.826 user 0m7.394s 00:05:40.826 sys 0m0.513s 00:05:40.826 19:35:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.826 19:35:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.826 ************************************ 00:05:40.826 END TEST event_scheduler 00:05:40.826 ************************************ 00:05:41.084 19:35:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.084 19:35:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.084 19:35:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.084 19:35:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.084 19:35:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.084 ************************************ 00:05:41.084 START TEST app_repeat 00:05:41.084 ************************************ 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2847582 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2847582' 00:05:41.084 Process app_repeat pid: 2847582 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.084 spdk_app_start Round 0 00:05:41.084 19:35:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2847582 /var/tmp/spdk-nbd.sock 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2847582 ']' 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.084 19:35:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.084 [2024-10-13 19:35:30.734106] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:41.084 [2024-10-13 19:35:30.734264] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847582 ] 00:05:41.084 [2024-10-13 19:35:30.880986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.342 [2024-10-13 19:35:31.023219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.342 [2024-10-13 19:35:31.023226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.274 19:35:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.274 19:35:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.274 19:35:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.274 Malloc0 00:05:42.532 19:35:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.790 Malloc1 00:05:42.790 19:35:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.790 19:35:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.791 19:35:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.049 /dev/nbd0 00:05:43.049 19:35:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.049 19:35:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.049 1+0 records in 00:05:43.049 1+0 records out 00:05:43.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248676 s, 16.5 MB/s 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.049 19:35:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.049 19:35:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.049 19:35:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.049 19:35:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.307 /dev/nbd1 00:05:43.307 19:35:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.307 19:35:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.307 1+0 records in 00:05:43.307 1+0 records out 00:05:43.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214583 s, 19.1 MB/s 00:05:43.307 19:35:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.564 19:35:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.564 19:35:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.564 19:35:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.564 19:35:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.564 19:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.564 19:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.564 
19:35:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.564 19:35:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.564 19:35:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.823 { 00:05:43.823 "nbd_device": "/dev/nbd0", 00:05:43.823 "bdev_name": "Malloc0" 00:05:43.823 }, 00:05:43.823 { 00:05:43.823 "nbd_device": "/dev/nbd1", 00:05:43.823 "bdev_name": "Malloc1" 00:05:43.823 } 00:05:43.823 ]' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.823 { 00:05:43.823 "nbd_device": "/dev/nbd0", 00:05:43.823 "bdev_name": "Malloc0" 00:05:43.823 }, 00:05:43.823 { 00:05:43.823 "nbd_device": "/dev/nbd1", 00:05:43.823 "bdev_name": "Malloc1" 00:05:43.823 } 00:05:43.823 ]' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.823 /dev/nbd1' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.823 /dev/nbd1' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.823 256+0 records in 00:05:43.823 256+0 records out 00:05:43.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393486 s, 266 MB/s 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.823 256+0 records in 00:05:43.823 256+0 records out 00:05:43.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253179 s, 41.4 MB/s 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.823 256+0 records in 00:05:43.823 256+0 records out 00:05:43.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302929 s, 34.6 MB/s 00:05:43.823 19:35:33 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.823 19:35:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.081 19:35:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.340 19:35:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.598 19:35:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.598 19:35:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.163 19:35:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.537 [2024-10-13 19:35:36.057718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.537 [2024-10-13 19:35:36.192363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.537 [2024-10-13 19:35:36.192365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.795 [2024-10-13 19:35:36.404827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.795 [2024-10-13 19:35:36.404908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.169 19:35:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.169 19:35:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.169 spdk_app_start Round 1 00:05:48.169 19:35:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2847582 /var/tmp/spdk-nbd.sock 00:05:48.169 19:35:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2847582 ']' 00:05:48.169 19:35:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.169 19:35:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.169 19:35:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
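Round 0 above has just exercised the full nbd data path: two 64 MiB malloc bdevs (4 KiB block size) are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with O_DIRECT, and the device contents are compared back against the source file. A condensed sketch of that flow, using only the RPCs and commands that appear in the trace; the temporary file is created in the current directory here instead of the workspace path:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096            # creates Malloc0
  $RPC bdev_malloc_create 64 4096            # creates Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct   # write through nbd
      cmp -b -n 1M nbdrandtest "$dev"                              # read back and verify
  done
  rm nbdrandtest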
00:05:48.169 19:35:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.169 19:35:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.427 19:35:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.427 19:35:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:48.427 19:35:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.685 Malloc0 00:05:48.685 19:35:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.252 Malloc1 00:05:49.252 19:35:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.252 19:35:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.510 /dev/nbd0 00:05:49.510 19:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.510 19:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:49.510 1+0 records in 00:05:49.510 1+0 records out 00:05:49.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204314 s, 20.0 MB/s 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.510 19:35:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.510 19:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.510 19:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.510 19:35:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.768 /dev/nbd1 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.768 1+0 records in 00:05:49.768 1+0 records out 00:05:49.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229258 s, 17.9 MB/s 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.768 19:35:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.768 19:35:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.026 { 00:05:50.026 "nbd_device": "/dev/nbd0", 00:05:50.026 "bdev_name": "Malloc0" 00:05:50.026 }, 00:05:50.026 { 00:05:50.026 "nbd_device": "/dev/nbd1", 00:05:50.026 "bdev_name": "Malloc1" 00:05:50.026 } 00:05:50.026 ]' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.026 { 00:05:50.026 "nbd_device": "/dev/nbd0", 00:05:50.026 "bdev_name": "Malloc0" 00:05:50.026 }, 00:05:50.026 { 00:05:50.026 "nbd_device": "/dev/nbd1", 00:05:50.026 "bdev_name": "Malloc1" 00:05:50.026 } 00:05:50.026 ]' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.026 /dev/nbd1' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.026 /dev/nbd1' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.026 256+0 records in 00:05:50.026 256+0 records out 00:05:50.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501928 s, 209 MB/s 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.026 256+0 records in 00:05:50.026 256+0 records out 00:05:50.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252036 s, 41.6 MB/s 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.026 19:35:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.284 256+0 records in 00:05:50.284 256+0 records out 00:05:50.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029548 s, 35.5 MB/s 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.284 19:35:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.542 19:35:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.799 19:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.057 19:35:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.057 19:35:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.622 19:35:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.996 [2024-10-13 19:35:42.454673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.996 [2024-10-13 19:35:42.589107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.996 [2024-10-13 19:35:42.589110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.996 [2024-10-13 19:35:42.801752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.996 [2024-10-13 19:35:42.801866] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.896 19:35:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.896 19:35:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.896 spdk_app_start Round 2 00:05:54.896 19:35:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2847582 /var/tmp/spdk-nbd.sock 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2847582 ']' 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
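The count checks in the rounds above (nbd_get_count) are derived from the nbd_get_disks RPC: the returned JSON array is reduced to its nbd_device fields with jq, and grep -c counts how many /dev/nbd entries remain, 2 while the disks are exported and 0 once both have been stopped (the bare "true" in the trace keeps the pipeline from aborting the script when grep finds nothing). A small bash sketch of that check under the same assumptions:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  disks_json=$($RPC nbd_get_disks)                     # e.g. [] after nbd_stop_disk
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)    # grep -c prints 0 and exits 1 on no match
  [ "$count" -eq 0 ] && echo "all nbd devices detached"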
00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.896 19:35:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:54.896 19:35:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.154 Malloc0 00:05:55.154 19:35:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.412 Malloc1 00:05:55.412 19:35:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.412 19:35:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.978 /dev/nbd0 00:05:55.978 19:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.978 19:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.978 1+0 records in 00:05:55.978 1+0 records out 00:05:55.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025154 s, 16.3 MB/s 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.978 19:35:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.978 19:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.978 19:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.978 19:35:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.236 /dev/nbd1 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.236 1+0 records in 00:05:56.236 1+0 records out 00:05:56.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255539 s, 16.0 MB/s 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.236 19:35:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.236 19:35:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:56.494 { 00:05:56.494 "nbd_device": "/dev/nbd0", 00:05:56.494 "bdev_name": "Malloc0" 00:05:56.494 }, 00:05:56.494 { 00:05:56.494 "nbd_device": "/dev/nbd1", 00:05:56.494 "bdev_name": "Malloc1" 00:05:56.494 } 00:05:56.494 ]' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.494 { 00:05:56.494 "nbd_device": "/dev/nbd0", 00:05:56.494 "bdev_name": "Malloc0" 00:05:56.494 }, 00:05:56.494 { 00:05:56.494 "nbd_device": "/dev/nbd1", 00:05:56.494 "bdev_name": "Malloc1" 00:05:56.494 } 00:05:56.494 ]' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.494 /dev/nbd1' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.494 /dev/nbd1' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.494 256+0 records in 00:05:56.494 256+0 records out 00:05:56.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509432 s, 206 MB/s 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.494 256+0 records in 00:05:56.494 256+0 records out 00:05:56.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245423 s, 42.7 MB/s 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.494 256+0 records in 00:05:56.494 256+0 records out 00:05:56.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290383 s, 36.1 MB/s 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.494 19:35:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.495 19:35:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.060 19:35:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.318 19:35:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.575 19:35:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.575 19:35:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.142 19:35:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.074 [2024-10-13 19:35:48.838390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.332 [2024-10-13 19:35:48.973302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.332 [2024-10-13 19:35:48.973305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.589 [2024-10-13 19:35:49.185907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.589 [2024-10-13 19:35:49.185987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.962 19:35:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2847582 /var/tmp/spdk-nbd.sock 00:06:00.963 19:35:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2847582 ']' 00:06:00.963 19:35:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.963 19:35:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.963 19:35:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
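Each round above tears down the same way: both nbd exports are stopped, the device count is confirmed to be zero, and spdk_kill_instance SIGTERM is sent over the RPC socket; after the last round the harness's killprocess helper double-checks that the reactor process (pid 2847582 in this run) has actually exited. A hedged sketch of that teardown, assuming repeat_pid still holds the pid captured at launch:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM
  # stand-in for killprocess: make sure the reactor really went away
  if kill -0 "$repeat_pid" 2>/dev/null; then
      kill "$repeat_pid"
      wait "$repeat_pid" || true
  fi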
00:06:00.963 19:35:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.963 19:35:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:01.221 19:35:50 event.app_repeat -- event/event.sh@39 -- # killprocess 2847582 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2847582 ']' 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2847582 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2847582 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2847582' 00:06:01.221 killing process with pid 2847582 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2847582 00:06:01.221 19:35:50 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2847582 00:06:02.592 spdk_app_start is called in Round 0. 00:06:02.592 Shutdown signal received, stop current app iteration 00:06:02.592 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:06:02.592 spdk_app_start is called in Round 1. 00:06:02.592 Shutdown signal received, stop current app iteration 00:06:02.592 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:06:02.592 spdk_app_start is called in Round 2. 00:06:02.592 Shutdown signal received, stop current app iteration 00:06:02.592 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:06:02.592 spdk_app_start is called in Round 3. 
00:06:02.592 Shutdown signal received, stop current app iteration 00:06:02.592 19:35:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.592 19:35:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.592 00:06:02.592 real 0m21.312s 00:06:02.592 user 0m45.473s 00:06:02.592 sys 0m3.386s 00:06:02.592 19:35:51 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.592 19:35:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.592 ************************************ 00:06:02.592 END TEST app_repeat 00:06:02.592 ************************************ 00:06:02.592 19:35:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.592 19:35:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.592 19:35:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.592 19:35:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.592 19:35:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.592 ************************************ 00:06:02.592 START TEST cpu_locks 00:06:02.592 ************************************ 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.592 * Looking for test storage... 00:06:02.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.592 19:35:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.592 19:35:52 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 19:35:52 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 19:35:52 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 19:35:52 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.593 --rc genhtml_branch_coverage=1 00:06:02.593 --rc genhtml_function_coverage=1 00:06:02.593 --rc genhtml_legend=1 00:06:02.593 --rc geninfo_all_blocks=1 00:06:02.593 --rc geninfo_unexecuted_blocks=1 00:06:02.593 00:06:02.593 ' 00:06:02.593 19:35:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.593 19:35:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.593 19:35:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.593 19:35:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.593 19:35:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.593 19:35:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.593 19:35:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 ************************************ 
00:06:02.593 START TEST default_locks 00:06:02.593 ************************************ 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2850339 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2850339 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2850339 ']' 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.593 19:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 [2024-10-13 19:35:52.298858] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:02.593 [2024-10-13 19:35:52.299007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850339 ] 00:06:02.851 [2024-10-13 19:35:52.434075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.851 [2024-10-13 19:35:52.573681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.784 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.784 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:03.784 19:35:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2850339 00:06:03.784 19:35:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2850339 00:06:03.784 19:35:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.382 lslocks: write error 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2850339 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2850339 ']' 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2850339 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2850339 00:06:04.382 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.383 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.383 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 2850339' 00:06:04.383 killing process with pid 2850339 00:06:04.383 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2850339 00:06:04.383 19:35:53 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2850339 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2850339 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2850339 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2850339 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2850339 ']' 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
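The locks_exist check that produced the 'lslocks: write error' message above simply asks util-linux lslocks which locks the target pid holds and greps for the spdk_cpu_lock files; the write error is harmless and appears because grep -q closes the pipe as soon as it matches. A minimal sketch of that check (assumed shape; the real helper lives in test/event/cpu_locks.sh):

# Returns 0 when the given pid holds a /var/tmp/spdk_cpu_lock_* lock.
locks_exist() {
    local pid=$1
    # lslocks -p filters to one pid; grep -q exits on the first match, which
    # is what makes lslocks report "write error" on the broken pipe above.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# Usage as in default_locks: the pid of the spdk_tgt started with -m 0x1.
locks_exist 2850339 && echo "core lock is held"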
00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2850339) - No such process 00:06:06.950 ERROR: process (pid: 2850339) is no longer running 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.950 00:06:06.950 real 0m4.179s 00:06:06.950 user 0m4.161s 00:06:06.950 sys 0m0.780s 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.950 19:35:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.950 ************************************ 00:06:06.950 END TEST default_locks 00:06:06.950 ************************************ 00:06:06.950 19:35:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.950 19:35:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.950 19:35:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.950 19:35:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.950 ************************************ 00:06:06.950 START TEST default_locks_via_rpc 00:06:06.950 ************************************ 00:06:06.950 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:06.950 19:35:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2850901 00:06:06.950 19:35:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.950 19:35:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2850901 00:06:06.950 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2850901 ']' 00:06:06.951 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.951 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.951 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
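The block above closes default_locks: killprocess checks the pid with kill -0 and its command name with ps before signalling, and the follow-up waitforlisten on the dead pid is wrapped in NOT, so the 'No such process' / return 1 lines are the pass condition. The same helpers now start the via_rpc variant. A rough, simplified sketch of that kill-and-verify step (the real killprocess/waitforlisten helpers in autotest_common.sh do more):

# Stand-in child process for the spdk_tgt the test starts.
sleep 300 &
pid=$!

kill -0 "$pid"                      # assert the process exists before killing it
ps --no-headers -o comm= "$pid"     # the helper also inspects the command name here
kill "$pid"
wait "$pid" 2>/dev/null || true     # reap it; a killed child returns non-zero

# Any later check on the dead pid must fail; the test expects exactly that.
if kill -0 "$pid" 2>/dev/null; then
    echo "unexpected: process $pid is still running"
else
    echo "process $pid is gone, as the test expects"
fi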
00:06:06.951 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.951 19:35:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.951 [2024-10-13 19:35:56.532635] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:06.951 [2024-10-13 19:35:56.532813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850901 ] 00:06:06.951 [2024-10-13 19:35:56.658152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.209 [2024-10-13 19:35:56.789917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2850901 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2850901 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2850901 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2850901 ']' 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2850901 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.144 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2850901 00:06:08.401 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.401 
19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.401 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2850901' 00:06:08.401 killing process with pid 2850901 00:06:08.401 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2850901 00:06:08.401 19:35:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2850901 00:06:10.930 00:06:10.930 real 0m3.939s 00:06:10.930 user 0m3.962s 00:06:10.930 sys 0m0.712s 00:06:10.930 19:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.930 19:36:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.930 ************************************ 00:06:10.930 END TEST default_locks_via_rpc 00:06:10.930 ************************************ 00:06:10.930 19:36:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.930 19:36:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.930 19:36:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.930 19:36:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.930 ************************************ 00:06:10.930 START TEST non_locking_app_on_locked_coremask 00:06:10.930 ************************************ 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2851399 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2851399 /var/tmp/spdk.sock 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2851399 ']' 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.930 19:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.930 [2024-10-13 19:36:00.524364] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
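default_locks_via_rpc, which just finished above, drives the same core lock through the JSON-RPC interface instead of process startup: framework_disable_cpumask_locks drops the claim taken at startup (the no_locks check then finds no /var/tmp/spdk_cpu_lock_* files) and framework_enable_cpumask_locks re-claims it. A sketch of the equivalent calls through SPDK's rpc.py client, assuming it exposes the two method names seen in the log (rpc_cmd is a thin wrapper) and the default /var/tmp/spdk.sock socket:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Release the claim taken at startup.
$SPDK/scripts/rpc.py framework_disable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks held"

# Re-claim core 0 and confirm the lock is back, as locks_exist does in the log.
$SPDK/scripts/rpc.py framework_enable_cpumask_locks
lslocks | grep spdk_cpu_lock && echo "core lock re-acquired"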
00:06:10.930 [2024-10-13 19:36:00.524581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851399 ] 00:06:10.930 [2024-10-13 19:36:00.656531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.189 [2024-10-13 19:36:00.787253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2851702 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2851702 /var/tmp/spdk2.sock 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2851702 ']' 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.124 19:36:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.124 [2024-10-13 19:36:01.843090] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:12.124 [2024-10-13 19:36:01.843248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851702 ] 00:06:12.382 [2024-10-13 19:36:02.037010] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
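non_locking_app_on_locked_coremask, running above, exercises the escape hatch: the first spdk_tgt (pid 2851399) holds the lock on core 0, and the second instance (pid 2851702) is still allowed to start on the same mask because it is launched with --disable-cpumask-locks and its own RPC socket, which is why the log prints 'CPU core locks deactivated.' for it. A sketch of that pairing, with the binary path and flags taken from the log but none of the test's readiness checks:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

$SPDK_BIN -m 0x1 &                                                  # first target, claims core 0
first=$!

$SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target, no claim
second=$!

# Only the first instance should show up as holding a spdk_cpu_lock file.
lslocks -p "$first"  | grep spdk_cpu_lock
lslocks -p "$second" | grep spdk_cpu_lock || echo "second target holds no core lock"

kill "$first" "$second"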
00:06:12.382 [2024-10-13 19:36:02.037083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.640 [2024-10-13 19:36:02.315613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.168 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.168 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.168 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2851399 00:06:15.168 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2851399 00:06:15.168 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.168 lslocks: write error 00:06:15.168 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2851399 00:06:15.169 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2851399 ']' 00:06:15.169 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2851399 00:06:15.169 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.169 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.426 19:36:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2851399 00:06:15.427 19:36:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.427 19:36:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.427 19:36:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2851399' 00:06:15.427 killing process with pid 2851399 00:06:15.427 19:36:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2851399 00:06:15.427 19:36:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2851399 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2851702 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2851702 ']' 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2851702 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2851702 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2851702' 00:06:20.688 
killing process with pid 2851702 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2851702 00:06:20.688 19:36:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2851702 00:06:22.589 00:06:22.589 real 0m11.886s 00:06:22.589 user 0m12.337s 00:06:22.589 sys 0m1.480s 00:06:22.589 19:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.589 19:36:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.589 ************************************ 00:06:22.589 END TEST non_locking_app_on_locked_coremask 00:06:22.589 ************************************ 00:06:22.589 19:36:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:22.589 19:36:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.589 19:36:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.589 19:36:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.589 ************************************ 00:06:22.589 START TEST locking_app_on_unlocked_coremask 00:06:22.589 ************************************ 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2853450 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2853450 /var/tmp/spdk.sock 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2853450 ']' 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.589 19:36:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.847 [2024-10-13 19:36:12.452609] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:22.847 [2024-10-13 19:36:12.452773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853450 ] 00:06:22.847 [2024-10-13 19:36:12.586488] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.847 [2024-10-13 19:36:12.586551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.105 [2024-10-13 19:36:12.724572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2853590 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2853590 /var/tmp/spdk2.sock 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2853590 ']' 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.039 19:36:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.039 [2024-10-13 19:36:13.777028] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
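locking_app_on_unlocked_coremask inverts that pairing: the first target (pid 2853450) starts with --disable-cpumask-locks, so core 0 is never claimed, and the second target (pid 2853590) starts with plain -m 0x1 and takes the lock itself, which is why locks_exist is run against the second pid further down. Roughly, under the same assumptions as the previous sketch:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

$SPDK_BIN -m 0x1 --disable-cpumask-locks &     # leaves core 0 unclaimed
unlocked=$!

$SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock &      # default behaviour: claims core 0
locked=$!

lslocks -p "$locked" | grep -q spdk_cpu_lock && echo "lock held by the second instance"

kill "$unlocked" "$locked"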
00:06:24.039 [2024-10-13 19:36:13.777180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853590 ] 00:06:24.297 [2024-10-13 19:36:13.966862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.555 [2024-10-13 19:36:14.246053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.083 19:36:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.084 19:36:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.084 19:36:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2853590 00:06:27.084 19:36:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2853590 00:06:27.084 19:36:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.342 lslocks: write error 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2853450 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2853450 ']' 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2853450 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2853450 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2853450' 00:06:27.342 killing process with pid 2853450 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2853450 00:06:27.342 19:36:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2853450 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2853590 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2853590 ']' 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2853590 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2853590 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.608 19:36:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2853590' 00:06:32.608 killing process with pid 2853590 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2853590 00:06:32.608 19:36:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2853590 00:06:35.137 00:06:35.137 real 0m12.167s 00:06:35.137 user 0m12.484s 00:06:35.137 sys 0m1.548s 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.137 ************************************ 00:06:35.137 END TEST locking_app_on_unlocked_coremask 00:06:35.137 ************************************ 00:06:35.137 19:36:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.137 19:36:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.137 19:36:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.137 19:36:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.137 ************************************ 00:06:35.137 START TEST locking_app_on_locked_coremask 00:06:35.137 ************************************ 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2854947 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2854947 /var/tmp/spdk.sock 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2854947 ']' 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.137 19:36:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.137 [2024-10-13 19:36:24.676943] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:35.137 [2024-10-13 19:36:24.677117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854947 ] 00:06:35.137 [2024-10-13 19:36:24.812784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.137 [2024-10-13 19:36:24.952756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2855085 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2855085 /var/tmp/spdk2.sock 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2855085 /var/tmp/spdk2.sock 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.521 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2855085 /var/tmp/spdk2.sock 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2855085 ']' 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.522 19:36:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.522 [2024-10-13 19:36:26.013231] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:36.522 [2024-10-13 19:36:26.013402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855085 ] 00:06:36.522 [2024-10-13 19:36:26.193981] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2854947 has claimed it. 00:06:36.522 [2024-10-13 19:36:26.194079] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2855085) - No such process 00:06:37.087 ERROR: process (pid: 2855085) is no longer running 00:06:37.087 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.087 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:37.087 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:37.087 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.087 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.088 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.088 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2854947 00:06:37.088 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2854947 00:06:37.088 19:36:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.345 lslocks: write error 00:06:37.345 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2854947 00:06:37.345 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2854947 ']' 00:06:37.345 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2854947 00:06:37.345 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:37.345 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.345 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2854947 00:06:37.603 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.603 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.603 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2854947' 00:06:37.603 killing process with pid 2854947 00:06:37.603 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2854947 00:06:37.603 19:36:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2854947 00:06:40.131 00:06:40.131 real 0m5.050s 00:06:40.131 user 0m5.305s 00:06:40.131 sys 0m0.939s 00:06:40.131 19:36:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
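locking_app_on_locked_coremask is the negative case that just played out: with pid 2854947 already holding core 0, the second spdk_tgt (pid 2855085) is started without --disable-cpumask-locks and aborts with 'Cannot create lock on core 0, probably process 2854947 has claimed it', so the test wraps its waitforlisten in NOT and treats the failure as the expected result. A compressed sketch of that expectation, ignoring the readiness polling the real test does:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

$SPDK_BIN -m 0x1 &                                  # holds the lock on core 0
holder=$!
sleep 1                                             # crude stand-in for waitforlisten

if $SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock; then    # expected to exit non-zero
    echo "unexpected: second instance started on a locked core"
else
    echo "second instance refused to start, as the test expects"
fi

kill "$holder"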
00:06:40.131 19:36:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.131 ************************************ 00:06:40.131 END TEST locking_app_on_locked_coremask 00:06:40.131 ************************************ 00:06:40.131 19:36:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:40.131 19:36:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.131 19:36:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.131 19:36:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.131 ************************************ 00:06:40.131 START TEST locking_overlapped_coremask 00:06:40.131 ************************************ 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2855524 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2855524 /var/tmp/spdk.sock 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2855524 ']' 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.131 19:36:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.131 [2024-10-13 19:36:29.775025] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:40.131 [2024-10-13 19:36:29.775170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855524 ] 00:06:40.131 [2024-10-13 19:36:29.904290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.389 [2024-10-13 19:36:30.044891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.389 [2024-10-13 19:36:30.044952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.389 [2024-10-13 19:36:30.044960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2855672 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2855672 /var/tmp/spdk2.sock 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2855672 /var/tmp/spdk2.sock 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2855672 /var/tmp/spdk2.sock 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2855672 ']' 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.380 19:36:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.380 [2024-10-13 19:36:31.004041] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
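locking_overlapped_coremask only needs one shared core to fail: the first target runs with -m 0x7 (cores 0-2, hence the three reactors above) and the second with -m 0x1c (cores 2-4), so the masks overlap exactly on core 2, the core named in the claim error that follows. The overlap is a one-line check:

# The two cpumasks from the log and their intersection; any non-zero AND means
# the second target will trip over an already-claimed core.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2

# check_remaining_locks (later in the log) then confirms that only the
# surviving target's lock files are left: /var/tmp/spdk_cpu_lock_000..002.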
00:06:41.380 [2024-10-13 19:36:31.004207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855672 ] 00:06:41.380 [2024-10-13 19:36:31.191053] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2855524 has claimed it. 00:06:41.380 [2024-10-13 19:36:31.191133] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2855672) - No such process 00:06:41.949 ERROR: process (pid: 2855672) is no longer running 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2855524 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2855524 ']' 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2855524 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2855524 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2855524' 00:06:41.949 killing process with pid 2855524 00:06:41.949 19:36:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2855524 00:06:41.949 19:36:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2855524 00:06:44.477 00:06:44.477 real 0m4.231s 00:06:44.477 user 0m11.563s 00:06:44.477 sys 0m0.770s 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.477 ************************************ 00:06:44.477 END TEST locking_overlapped_coremask 00:06:44.477 ************************************ 00:06:44.477 19:36:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.477 19:36:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.477 19:36:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.477 19:36:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.477 ************************************ 00:06:44.477 START TEST locking_overlapped_coremask_via_rpc 00:06:44.477 ************************************ 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2856105 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2856105 /var/tmp/spdk.sock 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856105 ']' 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.477 19:36:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.477 [2024-10-13 19:36:34.060279] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:44.477 [2024-10-13 19:36:34.060455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856105 ] 00:06:44.477 [2024-10-13 19:36:34.194805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.477 [2024-10-13 19:36:34.194858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.735 [2024-10-13 19:36:34.333030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.735 [2024-10-13 19:36:34.333086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.735 [2024-10-13 19:36:34.333094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2856253 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2856253 /var/tmp/spdk2.sock 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856253 ']' 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.669 19:36:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.669 [2024-10-13 19:36:35.388037] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:45.669 [2024-10-13 19:36:35.388187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856253 ] 00:06:45.927 [2024-10-13 19:36:35.568947] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
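locking_overlapped_coremask_via_rpc starts both overlapping targets with --disable-cpumask-locks (hence the two 'CPU core locks deactivated.' notices) and only then turns locking on over RPC: the first enable claims cores 0-2 for pid 2856105, and the same call against the second target's socket is expected to fail on core 2. A sketch of the two calls, again assuming scripts/rpc.py and the sockets shown in the log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC framework_enable_cpumask_locks                 # first target: claims cores 0-2

if $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "unexpected: second target claimed an already-locked core"
else
    echo "second target failed to claim core 2, as the test expects"
fi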
00:06:45.927 [2024-10-13 19:36:35.569006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.185 [2024-10-13 19:36:35.830276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.185 [2024-10-13 19:36:35.830327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.185 [2024-10-13 19:36:35.830319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.712 [2024-10-13 19:36:38.108582] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2856105 has claimed it. 
00:06:48.712 request: 00:06:48.712 { 00:06:48.712 "method": "framework_enable_cpumask_locks", 00:06:48.712 "req_id": 1 00:06:48.712 } 00:06:48.712 Got JSON-RPC error response 00:06:48.712 response: 00:06:48.712 { 00:06:48.712 "code": -32603, 00:06:48.712 "message": "Failed to claim CPU core: 2" 00:06:48.712 } 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2856105 /var/tmp/spdk.sock 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856105 ']' 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2856253 /var/tmp/spdk2.sock 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856253 ']' 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
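The -32603 "Failed to claim CPU core: 2" response above is the expected outcome of the overlapped-coremask check: the second spdk_tgt was launched with -m 0x1c (cores 2-4) and --disable-cpumask-locks while the first target is still running reactors on cores 0, 1 and 2, so the two processes deliberately share core 2. When framework_enable_cpumask_locks is then sent to the second target over /var/tmp/spdk2.sock, it cannot claim the lock for core 2 and the RPC fails. Roughly what the script drives here, as a sketch only -- rpc_cmd is assumed to wrap scripts/rpc.py, and the first target's coremask is not visible in this part of the log:

  build/bin/spdk_tgt -r /var/tmp/spdk.sock &                                   # reactors on cores 0-2
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, lock files disabled
  scripts/rpc.py framework_enable_cpumask_locks                                # first target: creates /var/tmp/spdk_cpu_lock_*
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails: core 2 already claimed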
00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.712 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.970 00:06:48.970 real 0m4.689s 00:06:48.970 user 0m1.560s 00:06:48.970 sys 0m0.263s 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.970 19:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.970 ************************************ 00:06:48.970 END TEST locking_overlapped_coremask_via_rpc 00:06:48.970 ************************************ 00:06:48.970 19:36:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:48.970 19:36:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2856105 ]] 00:06:48.970 19:36:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2856105 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856105 ']' 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856105 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856105 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856105' 00:06:48.970 killing process with pid 2856105 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2856105 00:06:48.970 19:36:38 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2856105 00:06:51.499 19:36:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2856253 ]] 00:06:51.499 19:36:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2856253 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856253 ']' 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856253 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856253 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856253' 00:06:51.499 killing process with pid 2856253 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2856253 00:06:51.499 19:36:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2856253 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2856105 ]] 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2856105 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856105 ']' 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856105 00:06:53.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2856105) - No such process 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2856105 is not found' 00:06:53.399 Process with pid 2856105 is not found 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2856253 ]] 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2856253 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856253 ']' 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856253 00:06:53.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2856253) - No such process 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2856253 is not found' 00:06:53.399 Process with pid 2856253 is not found 00:06:53.399 19:36:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.399 00:06:53.399 real 0m51.081s 00:06:53.399 user 1m26.801s 00:06:53.399 sys 0m7.812s 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.399 19:36:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.399 ************************************ 00:06:53.399 END TEST cpu_locks 00:06:53.399 ************************************ 00:06:53.399 00:06:53.399 real 1m21.143s 00:06:53.399 user 2m27.111s 00:06:53.399 sys 0m12.386s 00:06:53.399 19:36:43 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.399 19:36:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.399 ************************************ 00:06:53.399 END TEST event 00:06:53.399 ************************************ 00:06:53.399 19:36:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.399 19:36:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.399 19:36:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.399 19:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:53.399 ************************************ 00:06:53.399 START TEST thread 00:06:53.399 ************************************ 00:06:53.399 19:36:43 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.658 * Looking for test storage... 00:06:53.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.658 19:36:43 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:53.658 19:36:43 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:53.658 19:36:43 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:53.658 19:36:43 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:53.658 19:36:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.658 19:36:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.658 19:36:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.658 19:36:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.658 19:36:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.658 19:36:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.658 19:36:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.658 19:36:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.658 19:36:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.658 19:36:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.658 19:36:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.658 19:36:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:53.658 19:36:43 thread -- scripts/common.sh@345 -- # : 1 00:06:53.658 19:36:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.658 19:36:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.658 19:36:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:53.658 19:36:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:53.658 19:36:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.658 19:36:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:53.658 19:36:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.659 19:36:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:53.659 19:36:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:53.659 19:36:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.659 19:36:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:53.659 19:36:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.659 19:36:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.659 19:36:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.659 19:36:43 thread -- scripts/common.sh@368 -- # return 0 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:53.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.659 --rc genhtml_branch_coverage=1 00:06:53.659 --rc genhtml_function_coverage=1 00:06:53.659 --rc genhtml_legend=1 00:06:53.659 --rc geninfo_all_blocks=1 00:06:53.659 --rc geninfo_unexecuted_blocks=1 00:06:53.659 00:06:53.659 ' 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:53.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.659 --rc genhtml_branch_coverage=1 00:06:53.659 --rc genhtml_function_coverage=1 00:06:53.659 --rc genhtml_legend=1 00:06:53.659 --rc geninfo_all_blocks=1 00:06:53.659 --rc geninfo_unexecuted_blocks=1 00:06:53.659 
00:06:53.659 ' 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:53.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.659 --rc genhtml_branch_coverage=1 00:06:53.659 --rc genhtml_function_coverage=1 00:06:53.659 --rc genhtml_legend=1 00:06:53.659 --rc geninfo_all_blocks=1 00:06:53.659 --rc geninfo_unexecuted_blocks=1 00:06:53.659 00:06:53.659 ' 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:53.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.659 --rc genhtml_branch_coverage=1 00:06:53.659 --rc genhtml_function_coverage=1 00:06:53.659 --rc genhtml_legend=1 00:06:53.659 --rc geninfo_all_blocks=1 00:06:53.659 --rc geninfo_unexecuted_blocks=1 00:06:53.659 00:06:53.659 ' 00:06:53.659 19:36:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.659 19:36:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.659 ************************************ 00:06:53.659 START TEST thread_poller_perf 00:06:53.659 ************************************ 00:06:53.659 19:36:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.659 [2024-10-13 19:36:43.385966] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:53.659 [2024-10-13 19:36:43.386098] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857295 ] 00:06:53.917 [2024-10-13 19:36:43.525135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.917 [2024-10-13 19:36:43.664456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.917 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:55.290 [2024-10-13T17:36:45.105Z] ====================================== 00:06:55.290 [2024-10-13T17:36:45.105Z] busy:2718397513 (cyc) 00:06:55.290 [2024-10-13T17:36:45.105Z] total_run_count: 283000 00:06:55.290 [2024-10-13T17:36:45.105Z] tsc_hz: 2700000000 (cyc) 00:06:55.290 [2024-10-13T17:36:45.105Z] ====================================== 00:06:55.290 [2024-10-13T17:36:45.105Z] poller_cost: 9605 (cyc), 3557 (nsec) 00:06:55.290 00:06:55.290 real 0m1.579s 00:06:55.290 user 0m1.423s 00:06:55.290 sys 0m0.148s 00:06:55.290 19:36:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.290 19:36:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.290 ************************************ 00:06:55.290 END TEST thread_poller_perf 00:06:55.290 ************************************ 00:06:55.290 19:36:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.290 19:36:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:55.290 19:36:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.290 19:36:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.290 ************************************ 00:06:55.290 START TEST thread_poller_perf 00:06:55.290 ************************************ 00:06:55.290 19:36:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.290 [2024-10-13 19:36:45.017161] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:55.290 [2024-10-13 19:36:45.017276] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857570 ] 00:06:55.548 [2024-10-13 19:36:45.148576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.548 [2024-10-13 19:36:45.286634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.548 Running 1000 pollers for 1 seconds with 0 microseconds period. 
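The poller_cost reported in these result blocks works out to the busy cycle count divided by total_run_count, converted to nanoseconds via the reported tsc_hz. For the 1-microsecond-period run above: 2718397513 cyc / 283000 runs = 9605 cyc per poller invocation, and 9605 cyc at 2.7 GHz is about 3557 nsec; the 0-period block that follows can be checked the same way. A quick re-derivation, for illustration only (not part of the test):

  awk 'BEGIN {
    busy = 2718397513; runs = 283000; tsc_hz = 2700000000  # figures from the block above
    cyc = int(busy / runs)                                  # 9605
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc * 1e9 / tsc_hz)  # 3557
  }'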
00:06:56.920 [2024-10-13T17:36:46.735Z] ====================================== 00:06:56.920 [2024-10-13T17:36:46.735Z] busy:2705138167 (cyc) 00:06:56.920 [2024-10-13T17:36:46.735Z] total_run_count: 3733000 00:06:56.920 [2024-10-13T17:36:46.735Z] tsc_hz: 2700000000 (cyc) 00:06:56.920 [2024-10-13T17:36:46.735Z] ====================================== 00:06:56.920 [2024-10-13T17:36:46.735Z] poller_cost: 724 (cyc), 268 (nsec) 00:06:56.920 00:06:56.920 real 0m1.560s 00:06:56.920 user 0m1.416s 00:06:56.920 sys 0m0.136s 00:06:56.920 19:36:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.920 19:36:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.920 ************************************ 00:06:56.920 END TEST thread_poller_perf 00:06:56.920 ************************************ 00:06:56.920 19:36:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.920 00:06:56.920 real 0m3.373s 00:06:56.920 user 0m2.960s 00:06:56.920 sys 0m0.412s 00:06:56.920 19:36:46 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.920 19:36:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.920 ************************************ 00:06:56.920 END TEST thread 00:06:56.920 ************************************ 00:06:56.920 19:36:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:56.920 19:36:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.920 19:36:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.920 19:36:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.920 19:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.920 ************************************ 00:06:56.920 START TEST app_cmdline 00:06:56.920 ************************************ 00:06:56.920 19:36:46 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.920 * Looking for test storage... 
00:06:56.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:56.920 19:36:46 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.920 19:36:46 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.920 19:36:46 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.179 19:36:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.179 --rc genhtml_branch_coverage=1 00:06:57.179 --rc genhtml_function_coverage=1 00:06:57.179 --rc genhtml_legend=1 00:06:57.179 --rc geninfo_all_blocks=1 00:06:57.179 --rc geninfo_unexecuted_blocks=1 00:06:57.179 00:06:57.179 ' 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.179 --rc genhtml_branch_coverage=1 00:06:57.179 --rc genhtml_function_coverage=1 00:06:57.179 --rc genhtml_legend=1 00:06:57.179 --rc geninfo_all_blocks=1 00:06:57.179 --rc geninfo_unexecuted_blocks=1 
00:06:57.179 00:06:57.179 ' 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.179 --rc genhtml_branch_coverage=1 00:06:57.179 --rc genhtml_function_coverage=1 00:06:57.179 --rc genhtml_legend=1 00:06:57.179 --rc geninfo_all_blocks=1 00:06:57.179 --rc geninfo_unexecuted_blocks=1 00:06:57.179 00:06:57.179 ' 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.179 --rc genhtml_branch_coverage=1 00:06:57.179 --rc genhtml_function_coverage=1 00:06:57.179 --rc genhtml_legend=1 00:06:57.179 --rc geninfo_all_blocks=1 00:06:57.179 --rc geninfo_unexecuted_blocks=1 00:06:57.179 00:06:57.179 ' 00:06:57.179 19:36:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.179 19:36:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2857783 00:06:57.179 19:36:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.179 19:36:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2857783 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2857783 ']' 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.179 19:36:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.179 [2024-10-13 19:36:46.854862] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:57.179 [2024-10-13 19:36:46.855031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857783 ] 00:06:57.179 [2024-10-13 19:36:46.992659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.437 [2024-10-13 19:36:47.129517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.370 19:36:48 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.370 19:36:48 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:58.370 19:36:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:58.628 { 00:06:58.628 "version": "SPDK v25.01-pre git sha1 bbce7a874", 00:06:58.628 "fields": { 00:06:58.628 "major": 25, 00:06:58.628 "minor": 1, 00:06:58.628 "patch": 0, 00:06:58.628 "suffix": "-pre", 00:06:58.628 "commit": "bbce7a874" 00:06:58.628 } 00:06:58.628 } 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:58.628 19:36:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:58.628 19:36:48 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.886 request: 00:06:58.886 { 00:06:58.886 "method": "env_dpdk_get_mem_stats", 00:06:58.886 "req_id": 1 00:06:58.886 } 00:06:58.886 Got JSON-RPC error response 00:06:58.886 response: 00:06:58.886 { 00:06:58.886 "code": -32601, 00:06:58.886 "message": "Method not found" 00:06:58.886 } 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.886 19:36:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2857783 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2857783 ']' 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2857783 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2857783 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2857783' 00:06:58.886 killing process with pid 2857783 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@969 -- # kill 2857783 00:06:58.886 19:36:48 app_cmdline -- common/autotest_common.sh@974 -- # wait 2857783 00:07:01.415 00:07:01.415 real 0m4.541s 00:07:01.415 user 0m4.919s 00:07:01.415 sys 0m0.742s 00:07:01.415 19:36:51 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.415 19:36:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.415 ************************************ 00:07:01.415 END TEST app_cmdline 00:07:01.415 ************************************ 00:07:01.415 19:36:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.415 19:36:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.415 19:36:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.415 19:36:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.415 ************************************ 00:07:01.415 START TEST version 00:07:01.415 ************************************ 00:07:01.415 19:36:51 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.674 * Looking for test storage... 
00:07:01.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.674 19:36:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.674 19:36:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.674 19:36:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.674 19:36:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.674 19:36:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.674 19:36:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.674 19:36:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.674 19:36:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.674 19:36:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.674 19:36:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.674 19:36:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.674 19:36:51 version -- scripts/common.sh@344 -- # case "$op" in 00:07:01.674 19:36:51 version -- scripts/common.sh@345 -- # : 1 00:07:01.674 19:36:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.674 19:36:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.674 19:36:51 version -- scripts/common.sh@365 -- # decimal 1 00:07:01.674 19:36:51 version -- scripts/common.sh@353 -- # local d=1 00:07:01.674 19:36:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.674 19:36:51 version -- scripts/common.sh@355 -- # echo 1 00:07:01.674 19:36:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.674 19:36:51 version -- scripts/common.sh@366 -- # decimal 2 00:07:01.674 19:36:51 version -- scripts/common.sh@353 -- # local d=2 00:07:01.674 19:36:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.674 19:36:51 version -- scripts/common.sh@355 -- # echo 2 00:07:01.674 19:36:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.674 19:36:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.674 19:36:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.674 19:36:51 version -- scripts/common.sh@368 -- # return 0 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.674 --rc genhtml_branch_coverage=1 00:07:01.674 --rc genhtml_function_coverage=1 00:07:01.674 --rc genhtml_legend=1 00:07:01.674 --rc geninfo_all_blocks=1 00:07:01.674 --rc geninfo_unexecuted_blocks=1 00:07:01.674 00:07:01.674 ' 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.674 --rc genhtml_branch_coverage=1 00:07:01.674 --rc genhtml_function_coverage=1 00:07:01.674 --rc genhtml_legend=1 00:07:01.674 --rc geninfo_all_blocks=1 00:07:01.674 --rc geninfo_unexecuted_blocks=1 00:07:01.674 00:07:01.674 ' 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.674 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.674 --rc genhtml_branch_coverage=1 00:07:01.674 --rc genhtml_function_coverage=1 00:07:01.674 --rc genhtml_legend=1 00:07:01.674 --rc geninfo_all_blocks=1 00:07:01.674 --rc geninfo_unexecuted_blocks=1 00:07:01.674 00:07:01.674 ' 00:07:01.674 19:36:51 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.674 --rc genhtml_branch_coverage=1 00:07:01.674 --rc genhtml_function_coverage=1 00:07:01.674 --rc genhtml_legend=1 00:07:01.674 --rc geninfo_all_blocks=1 00:07:01.674 --rc geninfo_unexecuted_blocks=1 00:07:01.674 00:07:01.674 ' 00:07:01.674 19:36:51 version -- app/version.sh@17 -- # get_header_version major 00:07:01.674 19:36:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # cut -f2 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.674 19:36:51 version -- app/version.sh@17 -- # major=25 00:07:01.674 19:36:51 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.674 19:36:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # cut -f2 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.674 19:36:51 version -- app/version.sh@18 -- # minor=1 00:07:01.674 19:36:51 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.674 19:36:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # cut -f2 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.674 19:36:51 version -- app/version.sh@19 -- # patch=0 00:07:01.674 19:36:51 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.674 19:36:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # cut -f2 00:07:01.674 19:36:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.674 19:36:51 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.674 19:36:51 version -- app/version.sh@22 -- # version=25.1 00:07:01.674 19:36:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.674 19:36:51 version -- app/version.sh@28 -- # version=25.1rc0 00:07:01.674 19:36:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.674 19:36:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.674 19:36:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:01.674 19:36:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:01.674 00:07:01.674 real 0m0.200s 00:07:01.674 user 0m0.135s 00:07:01.675 sys 0m0.089s 00:07:01.675 19:36:51 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.675 
19:36:51 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.675 ************************************ 00:07:01.675 END TEST version 00:07:01.675 ************************************ 00:07:01.675 19:36:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:01.675 19:36:51 -- spdk/autotest.sh@194 -- # uname -s 00:07:01.675 19:36:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:01.675 19:36:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.675 19:36:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.675 19:36:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:01.675 19:36:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:01.675 19:36:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.675 19:36:51 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:01.675 19:36:51 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:01.675 19:36:51 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.675 19:36:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.675 19:36:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.675 19:36:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.675 ************************************ 00:07:01.675 START TEST nvmf_tcp 00:07:01.675 ************************************ 00:07:01.675 19:36:51 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.933 * Looking for test storage... 
00:07:01.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.933 19:36:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.933 --rc genhtml_branch_coverage=1 00:07:01.933 --rc genhtml_function_coverage=1 00:07:01.933 --rc genhtml_legend=1 00:07:01.933 --rc geninfo_all_blocks=1 00:07:01.933 --rc geninfo_unexecuted_blocks=1 00:07:01.933 00:07:01.933 ' 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.933 --rc genhtml_branch_coverage=1 00:07:01.933 --rc genhtml_function_coverage=1 00:07:01.933 --rc genhtml_legend=1 00:07:01.933 --rc geninfo_all_blocks=1 00:07:01.933 --rc geninfo_unexecuted_blocks=1 00:07:01.933 00:07:01.933 ' 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:01.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.933 --rc genhtml_branch_coverage=1 00:07:01.933 --rc genhtml_function_coverage=1 00:07:01.933 --rc genhtml_legend=1 00:07:01.933 --rc geninfo_all_blocks=1 00:07:01.933 --rc geninfo_unexecuted_blocks=1 00:07:01.933 00:07:01.933 ' 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.933 --rc genhtml_branch_coverage=1 00:07:01.933 --rc genhtml_function_coverage=1 00:07:01.933 --rc genhtml_legend=1 00:07:01.933 --rc geninfo_all_blocks=1 00:07:01.933 --rc geninfo_unexecuted_blocks=1 00:07:01.933 00:07:01.933 ' 00:07:01.933 19:36:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:01.933 19:36:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:01.933 19:36:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.933 19:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.933 ************************************ 00:07:01.933 START TEST nvmf_target_core 00:07:01.934 ************************************ 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:01.934 * Looking for test storage... 00:07:01.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.934 --rc genhtml_branch_coverage=1 00:07:01.934 --rc genhtml_function_coverage=1 00:07:01.934 --rc genhtml_legend=1 00:07:01.934 --rc geninfo_all_blocks=1 00:07:01.934 --rc geninfo_unexecuted_blocks=1 00:07:01.934 00:07:01.934 ' 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.934 --rc genhtml_branch_coverage=1 00:07:01.934 --rc genhtml_function_coverage=1 00:07:01.934 --rc genhtml_legend=1 00:07:01.934 --rc geninfo_all_blocks=1 00:07:01.934 --rc geninfo_unexecuted_blocks=1 00:07:01.934 00:07:01.934 ' 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.934 --rc genhtml_branch_coverage=1 00:07:01.934 --rc genhtml_function_coverage=1 00:07:01.934 --rc genhtml_legend=1 00:07:01.934 --rc geninfo_all_blocks=1 00:07:01.934 --rc geninfo_unexecuted_blocks=1 00:07:01.934 00:07:01.934 ' 00:07:01.934 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.934 --rc genhtml_branch_coverage=1 00:07:01.934 --rc genhtml_function_coverage=1 00:07:01.934 --rc genhtml_legend=1 00:07:01.934 --rc geninfo_all_blocks=1 00:07:01.934 --rc geninfo_unexecuted_blocks=1 00:07:01.934 00:07:01.934 ' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 
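The "[: : integer expression expected" message from nvmf/common.sh line 33 above is produced by the traced test '[' '' -eq 1 ']': the variable on the left expands to an empty string, so -eq has no integer to compare and the test simply returns non-zero, which the script tolerates. A guard written like the sketch below avoids the warning by defaulting the value first (SOME_FLAG is a placeholder name, not the variable actually used by common.sh):

  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
  fi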
************************************ 00:07:02.194 START TEST nvmf_abort 00:07:02.194 ************************************ 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:02.194 * Looking for test storage... 00:07:02.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:02.194 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.195 --rc genhtml_branch_coverage=1 00:07:02.195 --rc genhtml_function_coverage=1 00:07:02.195 --rc genhtml_legend=1 00:07:02.195 --rc geninfo_all_blocks=1 00:07:02.195 --rc geninfo_unexecuted_blocks=1 00:07:02.195 00:07:02.195 ' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.195 --rc genhtml_branch_coverage=1 00:07:02.195 --rc genhtml_function_coverage=1 00:07:02.195 --rc genhtml_legend=1 00:07:02.195 --rc geninfo_all_blocks=1 00:07:02.195 --rc geninfo_unexecuted_blocks=1 00:07:02.195 00:07:02.195 ' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.195 --rc genhtml_branch_coverage=1 00:07:02.195 --rc genhtml_function_coverage=1 00:07:02.195 --rc genhtml_legend=1 00:07:02.195 --rc geninfo_all_blocks=1 00:07:02.195 --rc geninfo_unexecuted_blocks=1 00:07:02.195 00:07:02.195 ' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.195 --rc genhtml_branch_coverage=1 00:07:02.195 --rc genhtml_function_coverage=1 00:07:02.195 --rc genhtml_legend=1 00:07:02.195 --rc geninfo_all_blocks=1 00:07:02.195 --rc geninfo_unexecuted_blocks=1 00:07:02.195 00:07:02.195 ' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
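For orientation: the nvmftestinit call that starts here detects the two E810 ports, moves the target-side port into its own network namespace, and addresses both ends so initiator and target can talk over TCP. A condensed sketch of the equivalent commands, using the interface names and addresses observed in this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); the actual logic lives in test/nvmf/common.sh:

# sketch of the network prep nvmftestinit performs on a phy (hardware NIC) run
ip netns add cvl_0_0_ns_spdk                                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # sanity-check initiator -> target reachability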
00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:02.195 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:04.730 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.731 19:36:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.731 19:36:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.731 19:36:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.731 19:36:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:07:04.731 00:07:04.731 --- 10.0.0.2 ping statistics --- 00:07:04.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.731 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:07:04.731 00:07:04.731 --- 10.0.0.1 ping statistics --- 00:07:04.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.731 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.731 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2860259 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2860259 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2860259 ']' 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.732 19:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.732 [2024-10-13 19:36:54.217567] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:07:04.732 [2024-10-13 19:36:54.217722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.732 [2024-10-13 19:36:54.362832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.732 [2024-10-13 19:36:54.508467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.732 [2024-10-13 19:36:54.508541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.732 [2024-10-13 19:36:54.508568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.732 [2024-10-13 19:36:54.508592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.732 [2024-10-13 19:36:54.508613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.732 [2024-10-13 19:36:54.511279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.732 [2024-10-13 19:36:54.511446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.732 [2024-10-13 19:36:54.511450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 [2024-10-13 19:36:55.192483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 Malloc0 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 Delay0 
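At this point the target stack for the abort test is in place: a TCP transport, a 64 MiB malloc bdev, and a delay bdev layered on top of it. The rpc_cmd calls traced above are the test harness's wrapper around the JSON-RPC client; run stand-alone against the same target they would correspond to roughly the following scripts/rpc.py invocations (flags copied from the trace; the 1,000,000 values are latencies in microseconds, roughly one second per I/O, which presumably keeps requests in flight long enough for the abort example to cancel them):

# rough stand-alone equivalent of the traced rpc_cmd calls (rpc.py path abbreviated)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport with the traced options
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB RAM-backed bdev, 4096-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # add ~1 s artificial latency to reads and writes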
00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 [2024-10-13 19:36:55.324429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.666 19:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:05.666 [2024-10-13 19:36:55.471591] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:08.193 Initializing NVMe Controllers 00:07:08.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:08.193 controller IO queue size 128 less than required 00:07:08.193 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:08.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:08.193 Initialization complete. Launching workers. 
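The remaining setup exposes the delay bdev over the fabric and then drives it: a subsystem nqn.2016-06.io.spdk:cnode0 is created (allow-any-host, serial SPDK0), Delay0 is attached as its namespace, and a data listener plus a discovery listener are opened on 10.0.0.2:4420. The abort example then connects to that listener and, per its traced options, runs briefly on one core at queue depth 128; the NS/CTRLR counters that follow report how many I/Os completed normally versus how many abort commands were submitted and succeeded. A condensed sketch of the same sequence (the option meanings noted here, -c core mask, -t run time in seconds, -q queue depth, -l log level, are inferred from the example's usage and are not spelled out in the trace):

# expose Delay0 over NVMe/TCP and stress it with the abort example (sketch)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128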
00:07:08.193 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22057 00:07:08.193 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22114, failed to submit 66 00:07:08.193 success 22057, unsuccessful 57, failed 0 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.193 rmmod nvme_tcp 00:07:08.193 rmmod nvme_fabrics 00:07:08.193 rmmod nvme_keyring 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2860259 ']' 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2860259 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2860259 ']' 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2860259 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2860259 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2860259' 00:07:08.193 killing process with pid 2860259 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2860259 00:07:08.193 19:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2860259 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:09.566 19:36:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:07:09.566 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.567 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.567 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.567 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.567 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.472 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.473 00:07:11.473 real 0m9.304s 00:07:11.473 user 0m15.562s 00:07:11.473 sys 0m2.800s 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.473 ************************************ 00:07:11.473 END TEST nvmf_abort 00:07:11.473 ************************************ 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.473 ************************************ 00:07:11.473 START TEST nvmf_ns_hotplug_stress 00:07:11.473 ************************************ 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.473 * Looking for test storage... 
00:07:11.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:11.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.473 --rc genhtml_branch_coverage=1 00:07:11.473 --rc genhtml_function_coverage=1 00:07:11.473 --rc genhtml_legend=1 00:07:11.473 --rc geninfo_all_blocks=1 00:07:11.473 --rc geninfo_unexecuted_blocks=1 00:07:11.473 00:07:11.473 ' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:11.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.473 --rc genhtml_branch_coverage=1 00:07:11.473 --rc genhtml_function_coverage=1 00:07:11.473 --rc genhtml_legend=1 00:07:11.473 --rc geninfo_all_blocks=1 00:07:11.473 --rc geninfo_unexecuted_blocks=1 00:07:11.473 00:07:11.473 ' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:11.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.473 --rc genhtml_branch_coverage=1 00:07:11.473 --rc genhtml_function_coverage=1 00:07:11.473 --rc genhtml_legend=1 00:07:11.473 --rc geninfo_all_blocks=1 00:07:11.473 --rc geninfo_unexecuted_blocks=1 00:07:11.473 00:07:11.473 ' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:11.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.473 --rc genhtml_branch_coverage=1 00:07:11.473 --rc genhtml_function_coverage=1 00:07:11.473 --rc genhtml_legend=1 00:07:11.473 --rc geninfo_all_blocks=1 00:07:11.473 --rc geninfo_unexecuted_blocks=1 00:07:11.473 00:07:11.473 ' 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.473 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.731 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.732 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.692 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:13.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.693 
19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:13.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:13.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:13.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:07:13.693 00:07:13.693 --- 10.0.0.2 ping statistics --- 00:07:13.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.693 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:07:13.693 00:07:13.693 --- 10.0.0.1 ping statistics --- 00:07:13.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.693 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2862769 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2862769 00:07:13.693 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2862769 ']' 00:07:13.694 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xE 00:07:13.694 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.694 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.694 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.694 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.694 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 [2024-10-13 19:37:03.504875] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:13.694 [2024-10-13 19:37:03.505021] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.953 [2024-10-13 19:37:03.649962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.211 [2024-10-13 19:37:03.795088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.211 [2024-10-13 19:37:03.795161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.211 [2024-10-13 19:37:03.795186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.211 [2024-10-13 19:37:03.795211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.211 [2024-10-13 19:37:03.795231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:14.211 [2024-10-13 19:37:03.797991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.211 [2024-10-13 19:37:03.798041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.211 [2024-10-13 19:37:03.798039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:14.777 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:15.035 [2024-10-13 19:37:04.710258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.035 19:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:15.293 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.550 [2024-10-13 19:37:05.264316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.550 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:15.808 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:16.066 Malloc0 00:07:16.067 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.325 Delay0 00:07:16.325 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.891 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:16.891 NULL1 00:07:16.891 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:17.148 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2863202 00:07:17.148 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:17.148 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:17.148 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.714 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.714 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:17.714 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:17.971 true 00:07:17.971 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:17.971 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.537 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.537 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:18.537 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:18.794 true 00:07:18.795 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:18.795 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.360 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.360 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:19.360 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:19.618 true 00:07:19.618 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:19.618 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.551 Read completed with error (sct=0, sc=11) 00:07:20.551 19:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.116 19:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:21.116 19:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:21.116 true 00:07:21.116 19:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:21.116 19:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.374 19:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.632 19:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:21.632 19:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:21.890 true 00:07:22.148 19:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:22.148 19:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.081 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.081 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:23.082 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:23.340 true 00:07:23.340 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:23.340 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.597 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.855 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:23.855 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:24.421 true 00:07:24.421 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:24.421 19:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.421 19:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.679 19:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:24.679 19:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:24.936 true 00:07:25.194 19:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:25.194 19:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.126 19:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.385 19:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:26.385 19:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:26.642 true 00:07:26.643 19:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:26.643 19:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.900 19:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.158 19:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:27.158 19:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:27.416 true 00:07:27.416 19:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:27.416 19:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.674 19:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.932 19:37:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:27.932 19:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:28.190 true 00:07:28.190 19:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:28.190 19:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.123 19:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.381 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:29.381 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:29.639 true 00:07:29.639 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:29.639 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.896 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.154 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:30.154 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:30.412 true 00:07:30.669 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:30.669 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.927 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.185 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:31.186 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:31.443 true 00:07:31.443 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:31.443 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:32.376 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.635 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:32.635 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:32.893 true 00:07:32.893 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:32.893 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.150 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.408 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:33.408 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:33.666 true 00:07:33.666 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:33.666 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.924 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.182 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:34.182 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:34.440 true 00:07:34.440 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:34.440 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.373 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.631 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:35.631 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:35.889 true 00:07:35.889 19:37:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:35.889 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.147 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.435 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:36.435 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:36.704 true 00:07:36.704 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:36.704 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.637 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.895 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:37.895 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:38.153 true 00:07:38.153 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:38.153 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.410 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.668 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:38.668 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:38.926 true 00:07:38.926 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:38.926 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.859 19:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.116 19:37:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:40.116 19:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:40.374 true 00:07:40.374 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:40.374 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.632 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.889 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:40.889 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:41.147 true 00:07:41.147 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:41.147 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.405 19:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.662 19:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:41.662 19:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:41.920 true 00:07:41.920 19:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:41.920 19:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.853 19:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.111 19:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:43.111 19:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:43.369 true 00:07:43.369 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:43.369 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.627 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.885 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:43.885 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:44.143 true 00:07:44.143 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:44.143 19:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.076 19:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.335 19:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:45.335 19:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:45.593 true 00:07:45.593 19:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:45.593 19:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.851 19:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.109 19:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:46.109 19:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:46.367 true 00:07:46.367 19:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:46.367 19:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.301 19:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.559 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:47.559 19:37:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:47.817 Initializing NVMe Controllers 00:07:47.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.817 Controller IO queue size 128, less than required. 00:07:47.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.817 Controller IO queue size 128, less than required. 00:07:47.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:47.817 Initialization complete. Launching workers. 00:07:47.817 ======================================================== 00:07:47.817 Latency(us) 00:07:47.817 Device Information : IOPS MiB/s Average min max 00:07:47.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 387.70 0.19 135006.42 3323.51 1034595.22 00:07:47.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6420.32 3.13 19873.30 4267.99 489988.73 00:07:47.817 ======================================================== 00:07:47.817 Total : 6808.02 3.32 26429.92 3323.51 1034595.22 00:07:47.817 00:07:47.817 true 00:07:47.817 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2863202 00:07:47.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2863202) - No such process 00:07:47.817 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2863202 00:07:47.817 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.075 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:48.337 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:48.337 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:48.337 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:48.337 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.337 19:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:48.594 null0 00:07:48.594 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.594 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.594 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:48.852 
null1 00:07:48.852 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.852 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.852 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:49.110 null2 00:07:49.110 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.110 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.110 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:49.368 null3 00:07:49.368 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.368 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.368 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:49.626 null4 00:07:49.626 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.626 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.626 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:49.883 null5 00:07:49.883 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.883 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.883 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:50.141 null6 00:07:50.141 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.141 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.141 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:50.400 null7 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
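The add_remove helper itself is visible piecemeal through the ns_hotplug_stress.sh@14-@18 trace lines: each worker binds its nsid/bdev pair and then attaches and detaches that namespace on nqn.2016-06.io.spdk:cnode1 ten times. An approximate reconstruction, again taken from the trace rather than the script source:

    add_remove() {
      local nsid=$1 bdev=$2               # @14: e.g. nsid=1 bdev=null0
      for (( i = 0; i < 10; i++ )); do    # @16: ten rounds per worker
        # @17: attach the bdev as namespace $nsid of cnode1
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        # @18: detach the same namespace again
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
    }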
00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2867261 2867262 2867264 2867266 2867268 2867270 2867272 2867274 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.400 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.658 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.658 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.658 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:50.658 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.658 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.917 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.917 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.917 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.175 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.434 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.692 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.950 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.208 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.466 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.466 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.466 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.724 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.724 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.724 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.724 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.724 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
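Each of these repeated blocks is simply another of the ten add/remove rounds racing across the eight workers. The script itself never inspects the resulting namespace set; if you wanted to watch the churn from a second shell, one way (shown purely as an illustration of the same rpc.py interface, and not part of this test) is to poll the subsystem list — field names here are assumptions about nvmf_get_subsystems output and may differ by SPDK version, and jq is assumed to be installed:

    # Print, once per second, how many namespaces are currently attached to cnode1.
    while sleep 1; do
      "$rpc" nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces | length'
    done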
00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.982 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.241 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.499 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.757 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.016 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.274 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.274 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.532 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.532 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.532 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.532 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.532 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.532 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.790 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.048 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.306 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.565 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
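A few lines further on every worker's counter reaches 10, the parent's wait returns, and the harness tears the target down: trap reset, nvmftestfini/nvmfcleanup, kernel module unload, killing the target process, then the TCP network cleanup. Condensed from the commands that appear verbatim in that part of the trace (helper names such as nvmftestfini and killprocess belong to SPDK's nvmf/common.sh and autotest_common.sh; this is only a summary of what they do in this run):

    trap - SIGINT SIGTERM EXIT
    sync
    modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill 2862769                   # nvmf target pid in this run (killprocess in the trace)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr
    ip -4 addr flush cvl_0_1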
00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.823 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.389 19:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.389 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.389 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.389 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.389 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.648 rmmod nvme_tcp 00:07:56.648 rmmod nvme_fabrics 00:07:56.648 rmmod nvme_keyring 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2862769 ']' 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2862769 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2862769 ']' 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2862769 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2862769 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2862769' 00:07:56.648 killing process with pid 2862769 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2862769 00:07:56.648 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2862769 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.023 19:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.939 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.939 00:07:59.939 real 0m48.400s 00:07:59.939 user 3m41.978s 00:07:59.939 sys 0m16.621s 00:07:59.939 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.939 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.939 ************************************ 00:07:59.939 END TEST nvmf_ns_hotplug_stress 00:07:59.939 ************************************ 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.940 ************************************ 00:07:59.940 START TEST nvmf_delete_subsystem 00:07:59.940 ************************************ 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:59.940 * Looking for test storage... 
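The ns_hotplug_stress run that just finished (END TEST above) was exercising namespace hot-attach/detach on nqn.2016-06.io.spdk:cnode1: lines 16-18 of target/ns_hotplug_stress.sh repeatedly add the null bdevs null0-null7 as namespaces 1-8 and remove them again while the host stays connected, so it keeps observing namespaces appearing and disappearing. The following is a plausible reconstruction of that loop from the interleaved trace, not a quote of the script; the real script may structure or parallelise it differently, and it uses its rpc_cmd wrapper rather than a direct rpc.py call.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        # ten attach/detach rounds per namespace, as suggested by the (( i < 10 )) trace
        for ((i = 0; i < 10; i++)); do
            "$RPC" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
            "$RPC" nvmf_subsystem_remove_ns "$NQN" "$nsid"
        done
    }

    # one loop per namespace, run concurrently so the target handles overlapping
    # attach/detach requests (the out-of-order nsids in the trace hint at this)
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait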
00:07:59.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.940 --rc genhtml_branch_coverage=1 00:07:59.940 --rc genhtml_function_coverage=1 00:07:59.940 --rc genhtml_legend=1 00:07:59.940 --rc geninfo_all_blocks=1 00:07:59.940 --rc geninfo_unexecuted_blocks=1 00:07:59.940 00:07:59.940 ' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.940 --rc genhtml_branch_coverage=1 00:07:59.940 --rc genhtml_function_coverage=1 00:07:59.940 --rc genhtml_legend=1 00:07:59.940 --rc geninfo_all_blocks=1 00:07:59.940 --rc geninfo_unexecuted_blocks=1 00:07:59.940 00:07:59.940 ' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.940 --rc genhtml_branch_coverage=1 00:07:59.940 --rc genhtml_function_coverage=1 00:07:59.940 --rc genhtml_legend=1 00:07:59.940 --rc geninfo_all_blocks=1 00:07:59.940 --rc geninfo_unexecuted_blocks=1 00:07:59.940 00:07:59.940 ' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.940 --rc genhtml_branch_coverage=1 00:07:59.940 --rc genhtml_function_coverage=1 00:07:59.940 --rc genhtml_legend=1 00:07:59.940 --rc geninfo_all_blocks=1 00:07:59.940 --rc geninfo_unexecuted_blocks=1 00:07:59.940 00:07:59.940 ' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.940 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.941 19:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:02.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.498 
19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:02.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.498 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:02.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:02.499 Found net devices under 0000:0a:00.1: cvl_0_1 
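The device-discovery trace above is gather_supported_nvmf_pci_devs from test/nvmf/common.sh: it assembles the known Intel E810/X722 and Mellanox device-ID lists, keeps the two E810 functions present on this host (0000:0a:00.0 and 0000:0a:00.1), and resolves each PCI function to its kernel network interface through sysfs. That mapping step, pulled out into a minimal standalone sketch (device-ID filtering and the checks for missing or downed links are omitted):

    # The net device for a PCI function is simply the directory name under
    # /sys/bus/pci/devices/<domain:bus:dev.fn>/net/ - here cvl_0_0 and cvl_0_1.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done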
00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:02.499 00:08:02.499 --- 10.0.0.2 ping statistics --- 00:08:02.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.499 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:02.499 00:08:02.499 --- 10.0.0.1 ping statistics --- 00:08:02.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.499 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2870290 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2870290 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2870290 ']' 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.499 19:37:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.499 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.499 [2024-10-13 19:37:51.937496] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:08:02.499 [2024-10-13 19:37:51.937640] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.499 [2024-10-13 19:37:52.075464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.499 [2024-10-13 19:37:52.212363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.499 [2024-10-13 19:37:52.212459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.499 [2024-10-13 19:37:52.212486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.499 [2024-10-13 19:37:52.212509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.499 [2024-10-13 19:37:52.212530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.499 [2024-10-13 19:37:52.215183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.499 [2024-10-13 19:37:52.215187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.065 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.065 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:03.065 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:03.065 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.065 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 [2024-10-13 19:37:52.898547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:03.324 19:37:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 [2024-10-13 19:37:52.916364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 NULL1 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 Delay0 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2870405 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:03.324 19:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:03.324 [2024-10-13 19:37:53.050809] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
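At this point the delete_subsystem test has a complete stack: nvmf_tgt (pid 2870290) runs inside the cvl_0_0_ns_spdk network namespace with cvl_0_0 at 10.0.0.2, the initiator side keeps cvl_0_1 at 10.0.0.1 in the root namespace, and the iptables ACCEPT rule plus the two pings above confirm the path. The RPC sequence traced above, collected into one readable sketch (paths and argument values are copied from the log; the script's rpc_cmd wrapper is replaced by a direct rpc.py call, which talks to the target over its UNIX socket regardless of the namespace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"      # nvmf_tgt was started earlier inside cvl_0_0_ns_spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0

    # 5 seconds of 70/30 random I/O at queue depth 128 from the initiator side;
    # the delay bdev slows completions so plenty of commands are still outstanding
    # when the subsystem is deleted underneath them
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!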
00:08:05.222 19:37:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.222 19:37:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.222 19:37:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 [2024-10-13 19:37:55.191067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error 
(sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed 
with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 
Read completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 Write completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 starting I/O failed: -6 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.481 Read completed with error (sct=0, sc=8) 00:08:05.482 Read completed with error (sct=0, sc=8) 00:08:05.482 Read completed with error (sct=0, sc=8) 00:08:05.482 starting I/O failed: -6 00:08:05.482 Write completed with error (sct=0, sc=8) 00:08:05.482 [2024-10-13 19:37:55.193569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:06.415 [2024-10-13 19:37:56.151261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 [2024-10-13 19:37:56.193455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read 
completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 [2024-10-13 19:37:56.195482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Write completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.415 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 [2024-10-13 19:37:56.196236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write 
completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Read completed with error (sct=0, sc=8) 00:08:06.416 Write completed with error (sct=0, sc=8) 00:08:06.416 [2024-10-13 19:37:56.196923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:08:06.416 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.416 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:06.416 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2870405 00:08:06.416 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:06.416 Initializing NVMe Controllers 00:08:06.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:06.416 Controller IO queue size 128, less than required. 00:08:06.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:06.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:06.416 Initialization complete. Launching workers. 
00:08:06.416 ======================================================== 00:08:06.416 Latency(us) 00:08:06.416 Device Information : IOPS MiB/s Average min max 00:08:06.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.96 0.10 945881.41 1930.74 1016928.21 00:08:06.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.38 0.08 879639.81 804.52 1016826.66 00:08:06.416 ======================================================== 00:08:06.416 Total : 349.34 0.17 916607.50 804.52 1016928.21 00:08:06.416 00:08:06.416 [2024-10-13 19:37:56.201814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:06.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2870405 00:08:06.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2870405) - No such process 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2870405 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2870405 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2870405 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.982 [2024-10-13 19:37:56.720201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2870856 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:06.982 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.240 [2024-10-13 19:37:56.825910] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
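The trace entries that follow come from the wait loop in delete_subsystem.sh: after launching spdk_nvme_perf in the background (perf_pid=2870856 above), the script probes the PID with `kill -0` and sleeps 0.5 s per iteration, with a guard once the counter passes ~20 iterations. A minimal stand-alone sketch of that polling pattern, with illustrative variable names rather than the verbatim script:

#!/usr/bin/env bash
# Stand-in for the backgrounded spdk_nvme_perf seen in the trace.
sleep 3 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do        # process still running?
    (( delay++ > 20 )) && { echo "timed out waiting for $perf_pid"; break; }
    sleep 0.5                                    # same 0.5 s poll interval as in the log
done
wait "$perf_pid" 2>/dev/null                     # reap it once it has exited

Once the PID is gone, `kill -0` fails with "No such process", which is the message recorded further down for pid 2870856.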
00:08:07.497 19:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.497 19:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:07.497 19:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.062 19:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.062 19:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:08.062 19:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.625 19:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.625 19:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:08.625 19:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.190 19:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.190 19:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:09.190 19:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.448 19:37:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.448 19:37:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:09.448 19:37:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.013 19:37:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.013 19:37:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:10.013 19:37:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.344 Initializing NVMe Controllers 00:08:10.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:10.344 Controller IO queue size 128, less than required. 00:08:10.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:10.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:10.344 Initialization complete. Launching workers. 
00:08:10.344 ======================================================== 00:08:10.344 Latency(us) 00:08:10.344 Device Information : IOPS MiB/s Average min max 00:08:10.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005774.85 1000341.06 1041572.28 00:08:10.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005503.68 1000256.11 1013786.49 00:08:10.344 ======================================================== 00:08:10.344 Total : 256.00 0.12 1005639.27 1000256.11 1041572.28 00:08:10.344 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2870856 00:08:10.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2870856) - No such process 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2870856 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.603 rmmod nvme_tcp 00:08:10.603 rmmod nvme_fabrics 00:08:10.603 rmmod nvme_keyring 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2870290 ']' 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2870290 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2870290 ']' 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2870290 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2870290 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2870290' 00:08:10.603 killing process with pid 2870290 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2870290 00:08:10.603 19:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2870290 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.979 19:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.882 00:08:13.882 real 0m13.949s 00:08:13.882 user 0m30.683s 00:08:13.882 sys 0m3.147s 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.882 ************************************ 00:08:13.882 END TEST nvmf_delete_subsystem 00:08:13.882 ************************************ 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.882 ************************************ 00:08:13.882 START TEST nvmf_host_management 00:08:13.882 ************************************ 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:13.882 * Looking for test storage... 
00:08:13.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.882 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.141 --rc genhtml_branch_coverage=1 00:08:14.141 --rc genhtml_function_coverage=1 00:08:14.141 --rc genhtml_legend=1 00:08:14.141 --rc geninfo_all_blocks=1 00:08:14.141 --rc geninfo_unexecuted_blocks=1 00:08:14.141 00:08:14.141 ' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.141 --rc genhtml_branch_coverage=1 00:08:14.141 --rc genhtml_function_coverage=1 00:08:14.141 --rc genhtml_legend=1 00:08:14.141 --rc geninfo_all_blocks=1 00:08:14.141 --rc geninfo_unexecuted_blocks=1 00:08:14.141 00:08:14.141 ' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.141 --rc genhtml_branch_coverage=1 00:08:14.141 --rc genhtml_function_coverage=1 00:08:14.141 --rc genhtml_legend=1 00:08:14.141 --rc geninfo_all_blocks=1 00:08:14.141 --rc geninfo_unexecuted_blocks=1 00:08:14.141 00:08:14.141 ' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.141 --rc genhtml_branch_coverage=1 00:08:14.141 --rc genhtml_function_coverage=1 00:08:14.141 --rc genhtml_legend=1 00:08:14.141 --rc geninfo_all_blocks=1 00:08:14.141 --rc geninfo_unexecuted_blocks=1 00:08:14.141 00:08:14.141 ' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.141 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:14.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.142 19:38:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:16.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:16.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:16.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.045 19:38:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:16.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.045 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:08:16.046 00:08:16.046 --- 10.0.0.2 ping statistics --- 00:08:16.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.046 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:16.046 00:08:16.046 --- 10.0.0.1 ping statistics --- 00:08:16.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.046 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2873348 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2873348 00:08:16.046 19:38:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2873348 ']' 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.046 19:38:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.304 [2024-10-13 19:38:05.874038] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:08:16.304 [2024-10-13 19:38:05.874181] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.304 [2024-10-13 19:38:06.015828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.562 [2024-10-13 19:38:06.159607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.562 [2024-10-13 19:38:06.159691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.562 [2024-10-13 19:38:06.159717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.562 [2024-10-13 19:38:06.159742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.562 [2024-10-13 19:38:06.159761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
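For reference on the core mask above: nvmf_tgt is started with `-m 0x1E`, and 0x1E is binary 11110, i.e. CPU cores 1 through 4, which matches the four reactor notices that follow. A tiny illustrative decoder for such a mask (not part of SPDK):

# Decode a hex core mask into core IDs; 0x1E selects cores 1-4.
mask=0x1E
for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "core $core selected"
done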
00:08:16.562 [2024-10-13 19:38:06.162648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.562 [2024-10-13 19:38:06.162743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.562 [2024-10-13 19:38:06.162787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.562 [2024-10-13 19:38:06.162793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.129 [2024-10-13 19:38:06.856022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.129 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.387 Malloc0 00:08:17.387 [2024-10-13 19:38:06.986189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.387 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.387 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:17.387 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:17.387 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2873524 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2873524 /var/tmp/bdevperf.sock 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2873524 ']' 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:17.387 { 00:08:17.387 "params": { 00:08:17.387 "name": "Nvme$subsystem", 00:08:17.387 "trtype": "$TEST_TRANSPORT", 00:08:17.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.387 "adrfam": "ipv4", 00:08:17.387 "trsvcid": "$NVMF_PORT", 00:08:17.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.387 "hdgst": ${hdgst:-false}, 00:08:17.387 "ddgst": ${ddgst:-false} 00:08:17.387 }, 00:08:17.387 "method": "bdev_nvme_attach_controller" 00:08:17.387 } 00:08:17.387 EOF 00:08:17.387 )") 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:17.387 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:17.387 "params": { 00:08:17.387 "name": "Nvme0", 00:08:17.387 "trtype": "tcp", 00:08:17.387 "traddr": "10.0.0.2", 00:08:17.387 "adrfam": "ipv4", 00:08:17.387 "trsvcid": "4420", 00:08:17.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:17.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:17.387 "hdgst": false, 00:08:17.387 "ddgst": false 00:08:17.387 }, 00:08:17.387 "method": "bdev_nvme_attach_controller" 00:08:17.387 }' 00:08:17.387 [2024-10-13 19:38:07.106551] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:08:17.387 [2024-10-13 19:38:07.106688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873524 ] 00:08:17.645 [2024-10-13 19:38:07.233668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.645 [2024-10-13 19:38:07.361524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.211 Running I/O for 10 seconds... 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:18.469 19:38:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 [2024-10-13 19:38:08.126096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) 
to be set 00:08:18.469 [2024-10-13 19:38:08.126513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.126712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 [2024-10-13 19:38:08.136589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.469 [2024-10-13 19:38:08.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.136668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.469 [2024-10-13 19:38:08.136690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.136717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.469 [2024-10-13 19:38:08.136737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.136759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:18.469 [2024-10-13 19:38:08.136784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.136803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:08:18.469 [2024-10-13 19:38:08.137292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.469 [2024-10-13 19:38:08.137717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.469 [2024-10-13 19:38:08.137738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.137771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.137791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.137815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.137836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.137859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.137880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.137903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.137924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.137948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.137969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.137992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.470 [2024-10-13 19:38:08.138176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 
[2024-10-13 19:38:08.138243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 19:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:18.470 [2024-10-13 19:38:08.138312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.138961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.138982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.470 [2024-10-13 19:38:08.139788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.470 [2024-10-13 19:38:08.139808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.139831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.139851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.139876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.139896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.139919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.139939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.139962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.139983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.471 [2024-10-13 19:38:08.140297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.471 [2024-10-13 19:38:08.140612] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:08:18.471 [2024-10-13 19:38:08.141852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:18.471 task offset: 49152 on job bdev=Nvme0n1 fails 00:08:18.471 00:08:18.471 Latency(us) 00:08:18.471 [2024-10-13T17:38:08.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.471 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:18.471 Job: Nvme0n1 ended in about 0.30 seconds with error 00:08:18.471 Verification LBA range: start 0x0 length 0x400 00:08:18.471 Nvme0n1 : 0.30 1275.67 79.73 212.61 0.00 41334.06 4490.43 40583.77 00:08:18.471 [2024-10-13T17:38:08.286Z] =================================================================================================================== 00:08:18.471 [2024-10-13T17:38:08.286Z] Total : 1275.67 79.73 212.61 0.00 41334.06 4490.43 40583.77 00:08:18.471 [2024-10-13 19:38:08.146804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.471 [2024-10-13 19:38:08.146881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:18.471 [2024-10-13 19:38:08.195179] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2873524 00:08:19.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2873524) - No such process 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:19.404 { 00:08:19.404 "params": { 00:08:19.404 "name": "Nvme$subsystem", 00:08:19.404 "trtype": "$TEST_TRANSPORT", 00:08:19.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.404 "adrfam": "ipv4", 00:08:19.404 "trsvcid": "$NVMF_PORT", 00:08:19.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.404 "hdgst": ${hdgst:-false}, 00:08:19.404 "ddgst": ${ddgst:-false} 00:08:19.404 }, 00:08:19.404 "method": "bdev_nvme_attach_controller" 00:08:19.404 } 00:08:19.404 EOF 00:08:19.404 )") 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:19.404 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:19.404 "params": { 00:08:19.404 "name": "Nvme0", 00:08:19.404 "trtype": "tcp", 00:08:19.404 "traddr": "10.0.0.2", 00:08:19.405 "adrfam": "ipv4", 00:08:19.405 "trsvcid": "4420", 00:08:19.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:19.405 "hdgst": false, 00:08:19.405 "ddgst": false 00:08:19.405 }, 00:08:19.405 "method": "bdev_nvme_attach_controller" 00:08:19.405 }' 00:08:19.663 [2024-10-13 19:38:09.223081] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:08:19.663 [2024-10-13 19:38:09.223223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873802 ] 00:08:19.663 [2024-10-13 19:38:09.352585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.921 [2024-10-13 19:38:09.482861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.178 Running I/O for 1 seconds... 
00:08:21.549 1344.00 IOPS, 84.00 MiB/s 00:08:21.549 Latency(us) 00:08:21.549 [2024-10-13T17:38:11.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.549 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:21.549 Verification LBA range: start 0x0 length 0x400 00:08:21.549 Nvme0n1 : 1.01 1395.15 87.20 0.00 0.00 45074.84 7184.69 40583.77 00:08:21.549 [2024-10-13T17:38:11.364Z] =================================================================================================================== 00:08:21.549 [2024-10-13T17:38:11.364Z] Total : 1395.15 87.20 0.00 0.00 45074.84 7184.69 40583.77 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.114 rmmod nvme_tcp 00:08:22.114 rmmod nvme_fabrics 00:08:22.114 rmmod nvme_keyring 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2873348 ']' 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2873348 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2873348 ']' 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2873348 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2873348 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:22.114 19:38:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2873348' 00:08:22.114 killing process with pid 2873348 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2873348 00:08:22.114 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2873348 00:08:23.488 [2024-10-13 19:38:13.053547] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.488 19:38:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:25.431 00:08:25.431 real 0m11.625s 00:08:25.431 user 0m31.650s 00:08:25.431 sys 0m3.039s 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.431 ************************************ 00:08:25.431 END TEST nvmf_host_management 00:08:25.431 ************************************ 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.431 19:38:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.689 ************************************ 00:08:25.689 START TEST nvmf_lvol 00:08:25.689 ************************************ 00:08:25.689 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:25.689 * Looking for test storage... 00:08:25.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.689 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.689 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.689 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.689 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.690 --rc genhtml_branch_coverage=1 00:08:25.690 --rc genhtml_function_coverage=1 00:08:25.690 --rc genhtml_legend=1 00:08:25.690 --rc geninfo_all_blocks=1 00:08:25.690 --rc geninfo_unexecuted_blocks=1 00:08:25.690 00:08:25.690 ' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:25.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.690 --rc genhtml_branch_coverage=1 00:08:25.690 --rc genhtml_function_coverage=1 00:08:25.690 --rc genhtml_legend=1 00:08:25.690 --rc geninfo_all_blocks=1 00:08:25.690 --rc geninfo_unexecuted_blocks=1 00:08:25.690 00:08:25.690 ' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.690 --rc genhtml_branch_coverage=1 00:08:25.690 --rc genhtml_function_coverage=1 00:08:25.690 --rc genhtml_legend=1 00:08:25.690 --rc geninfo_all_blocks=1 00:08:25.690 --rc geninfo_unexecuted_blocks=1 00:08:25.690 00:08:25.690 ' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.690 --rc genhtml_branch_coverage=1 00:08:25.690 --rc genhtml_function_coverage=1 00:08:25.690 --rc genhtml_legend=1 00:08:25.690 --rc geninfo_all_blocks=1 00:08:25.690 --rc geninfo_unexecuted_blocks=1 00:08:25.690 00:08:25.690 ' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:25.690 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.691 19:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.222 19:38:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:28.222 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:28.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:08:28.223 00:08:28.223 --- 10.0.0.2 ping statistics --- 00:08:28.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.223 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:28.223 00:08:28.223 --- 10.0.0.1 ping statistics --- 00:08:28.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.223 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2876161 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2876161 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2876161 ']' 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.223 19:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.223 [2024-10-13 19:38:17.808008] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:08:28.223 [2024-10-13 19:38:17.808148] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.223 [2024-10-13 19:38:17.946982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.481 [2024-10-13 19:38:18.089024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.481 [2024-10-13 19:38:18.089106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.481 [2024-10-13 19:38:18.089132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.481 [2024-10-13 19:38:18.089157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.481 [2024-10-13 19:38:18.089177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.481 [2024-10-13 19:38:18.091863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.481 [2024-10-13 19:38:18.091922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.481 [2024-10-13 19:38:18.091926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.046 19:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.304 [2024-10-13 19:38:19.065217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.304 19:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:29.870 19:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:29.870 19:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:30.128 19:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:30.128 19:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:30.385 19:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:30.643 19:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ea3c11d0-37de-486c-80c3-4e1a6e0a6ee0 00:08:30.643 19:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea3c11d0-37de-486c-80c3-4e1a6e0a6ee0 lvol 20 00:08:30.901 19:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=162e3102-27d0-44b4-9302-c0e7788c73fe 00:08:30.901 19:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.466 19:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 162e3102-27d0-44b4-9302-c0e7788c73fe 00:08:31.466 19:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.723 [2024-10-13 19:38:21.503692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.723 19:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.288 19:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2876722 00:08:32.288 19:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:32.288 19:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:33.222 19:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 162e3102-27d0-44b4-9302-c0e7788c73fe MY_SNAPSHOT 00:08:33.479 19:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7fe7466e-cfca-4e51-b1a4-50708c20722d 00:08:33.479 19:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 162e3102-27d0-44b4-9302-c0e7788c73fe 30 00:08:33.737 19:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7fe7466e-cfca-4e51-b1a4-50708c20722d MY_CLONE 00:08:34.303 19:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=62922ac3-24f6-4c7f-96f8-f2a3b9e131d7 00:08:34.303 19:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 62922ac3-24f6-4c7f-96f8-f2a3b9e131d7 00:08:34.868 19:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2876722 00:08:42.970 Initializing NVMe Controllers 00:08:42.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:42.971 Controller IO queue size 128, less than required. 00:08:42.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:42.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:42.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:42.971 Initialization complete. Launching workers. 00:08:42.971 ======================================================== 00:08:42.971 Latency(us) 00:08:42.971 Device Information : IOPS MiB/s Average min max 00:08:42.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8308.50 32.46 15415.91 379.61 141525.39 00:08:42.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8122.30 31.73 15764.15 3326.62 157365.76 00:08:42.971 ======================================================== 00:08:42.971 Total : 16430.80 64.18 15588.06 379.61 157365.76 00:08:42.971 00:08:42.971 19:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:42.971 19:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 162e3102-27d0-44b4-9302-c0e7788c73fe 00:08:42.971 19:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea3c11d0-37de-486c-80c3-4e1a6e0a6ee0 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.229 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.229 rmmod nvme_tcp 00:08:43.487 rmmod nvme_fabrics 00:08:43.487 rmmod nvme_keyring 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2876161 ']' 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2876161 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2876161 ']' 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2876161 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2876161 00:08:43.487 19:38:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2876161' 00:08:43.487 killing process with pid 2876161 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2876161 00:08:43.487 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2876161 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.861 19:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.763 00:08:46.763 real 0m21.280s 00:08:46.763 user 1m11.448s 00:08:46.763 sys 0m5.251s 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.763 ************************************ 00:08:46.763 END TEST nvmf_lvol 00:08:46.763 ************************************ 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.763 19:38:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.021 ************************************ 00:08:47.021 START TEST nvmf_lvs_grow 00:08:47.021 ************************************ 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.021 * Looking for test storage... 
00:08:47.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:47.021 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.022 --rc genhtml_branch_coverage=1 00:08:47.022 --rc genhtml_function_coverage=1 00:08:47.022 --rc genhtml_legend=1 00:08:47.022 --rc geninfo_all_blocks=1 00:08:47.022 --rc geninfo_unexecuted_blocks=1 00:08:47.022 00:08:47.022 ' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.022 --rc genhtml_branch_coverage=1 00:08:47.022 --rc genhtml_function_coverage=1 00:08:47.022 --rc genhtml_legend=1 00:08:47.022 --rc geninfo_all_blocks=1 00:08:47.022 --rc geninfo_unexecuted_blocks=1 00:08:47.022 00:08:47.022 ' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:47.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.022 --rc genhtml_branch_coverage=1 00:08:47.022 --rc genhtml_function_coverage=1 00:08:47.022 --rc genhtml_legend=1 00:08:47.022 --rc geninfo_all_blocks=1 00:08:47.022 --rc geninfo_unexecuted_blocks=1 00:08:47.022 00:08:47.022 ' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.022 --rc genhtml_branch_coverage=1 00:08:47.022 --rc genhtml_function_coverage=1 00:08:47.022 --rc genhtml_legend=1 00:08:47.022 --rc geninfo_all_blocks=1 00:08:47.022 --rc geninfo_unexecuted_blocks=1 00:08:47.022 00:08:47.022 ' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:47.022 19:38:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.022 19:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.552 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.553 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.553 19:38:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.553 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.553 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.553 19:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:08:49.553 00:08:49.553 --- 10.0.0.2 ping statistics --- 00:08:49.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.553 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:08:49.553 00:08:49.553 --- 10.0.0.1 ping statistics --- 00:08:49.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.553 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2880135 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2880135 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2880135 ']' 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.553 19:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.553 [2024-10-13 19:38:39.193715] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
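The trace above is nvmf_tcp_init building the test topology on the two ice-driver ports it just discovered (0000:0a:00.0 -> cvl_0_0, 0000:0a:00.1 -> cvl_0_1): cvl_0_0 is moved into a network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24 to act as the target side, cvl_0_1 stays in the root namespace as the initiator on 10.0.0.1/24, an iptables rule admits TCP port 4420, both directions are verified with a single ping, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that sequence, using the interface names and addresses from this run (they are machine-specific, not fixed values) and with SPDK tree paths shortened:

    # Namespace topology set up by nvmf_tcp_init in the trace above.
    # cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are what this particular run used.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1  # target runs inside the namespace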
00:08:49.553 [2024-10-13 19:38:39.193877] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.553 [2024-10-13 19:38:39.336160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.811 [2024-10-13 19:38:39.474921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.811 [2024-10-13 19:38:39.475013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.811 [2024-10-13 19:38:39.475051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.811 [2024-10-13 19:38:39.475086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.811 [2024-10-13 19:38:39.475106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.811 [2024-10-13 19:38:39.476824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.745 [2024-10-13 19:38:40.538786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.745 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.003 ************************************ 00:08:51.003 START TEST lvs_grow_clean 00:08:51.003 ************************************ 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:51.003 19:38:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.003 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.261 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.261 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.519 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:08:51.519 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:08:51.519 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.777 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.777 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.777 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d lvol 150 00:08:52.034 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6 00:08:52.034 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.034 19:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.293 [2024-10-13 19:38:42.030418] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.293 [2024-10-13 19:38:42.030547] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.293 true 00:08:52.293 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:08:52.293 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.552 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.552 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.810 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6 00:08:53.067 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:53.325 [2024-10-13 19:38:43.122145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.325 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2880707 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2880707 /var/tmp/bdevperf.sock 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2880707 ']' 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.890 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.890 [2024-10-13 19:38:43.517014] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
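At this point lvs_grow_clean has finished staging the target side: a 200M file-backed AIO bdev, a logical volume store with 4 MiB clusters on top of it (reported as 49 total data clusters), a 150M lvol inside that store, and an NVMe/TCP subsystem (nqn.2016-06.io.spdk:cnode0, listener 10.0.0.2:4420) exporting the lvol. The backing file has already been grown to 400M and rescanned, but the lvstore still reports 49 clusters; the bdev_lvol_grow_lvstore call comes later, while bdevperf I/O is in flight. A condensed sketch of the RPC sequence traced above, with SPDK tree paths shortened to rpc.py and the aio file path abbreviated:

    # lvs_grow_clean setup, condensed from the trace above (rpc.py = scripts/rpc.py, paths shortened).
    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M aio_file                       # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev                 # ...and let the AIO bdev pick up the new size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420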
00:08:53.890 [2024-10-13 19:38:43.517167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880707 ] 00:08:53.890 [2024-10-13 19:38:43.647219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.148 [2024-10-13 19:38:43.783289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.716 19:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.716 19:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:54.716 19:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:55.282 Nvme0n1 00:08:55.282 19:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:55.540 [ 00:08:55.540 { 00:08:55.540 "name": "Nvme0n1", 00:08:55.540 "aliases": [ 00:08:55.540 "fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6" 00:08:55.540 ], 00:08:55.540 "product_name": "NVMe disk", 00:08:55.540 "block_size": 4096, 00:08:55.540 "num_blocks": 38912, 00:08:55.540 "uuid": "fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6", 00:08:55.540 "numa_id": 0, 00:08:55.540 "assigned_rate_limits": { 00:08:55.540 "rw_ios_per_sec": 0, 00:08:55.540 "rw_mbytes_per_sec": 0, 00:08:55.540 "r_mbytes_per_sec": 0, 00:08:55.540 "w_mbytes_per_sec": 0 00:08:55.540 }, 00:08:55.540 "claimed": false, 00:08:55.540 "zoned": false, 00:08:55.540 "supported_io_types": { 00:08:55.540 "read": true, 00:08:55.540 "write": true, 00:08:55.540 "unmap": true, 00:08:55.540 "flush": true, 00:08:55.540 "reset": true, 00:08:55.540 "nvme_admin": true, 00:08:55.540 "nvme_io": true, 00:08:55.540 "nvme_io_md": false, 00:08:55.540 "write_zeroes": true, 00:08:55.540 "zcopy": false, 00:08:55.540 "get_zone_info": false, 00:08:55.540 "zone_management": false, 00:08:55.540 "zone_append": false, 00:08:55.540 "compare": true, 00:08:55.540 "compare_and_write": true, 00:08:55.540 "abort": true, 00:08:55.540 "seek_hole": false, 00:08:55.540 "seek_data": false, 00:08:55.540 "copy": true, 00:08:55.540 "nvme_iov_md": false 00:08:55.540 }, 00:08:55.540 "memory_domains": [ 00:08:55.540 { 00:08:55.540 "dma_device_id": "system", 00:08:55.540 "dma_device_type": 1 00:08:55.540 } 00:08:55.540 ], 00:08:55.540 "driver_specific": { 00:08:55.540 "nvme": [ 00:08:55.540 { 00:08:55.540 "trid": { 00:08:55.540 "trtype": "TCP", 00:08:55.540 "adrfam": "IPv4", 00:08:55.540 "traddr": "10.0.0.2", 00:08:55.540 "trsvcid": "4420", 00:08:55.540 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:55.540 }, 00:08:55.540 "ctrlr_data": { 00:08:55.540 "cntlid": 1, 00:08:55.540 "vendor_id": "0x8086", 00:08:55.540 "model_number": "SPDK bdev Controller", 00:08:55.540 "serial_number": "SPDK0", 00:08:55.540 "firmware_revision": "25.01", 00:08:55.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.540 "oacs": { 00:08:55.540 "security": 0, 00:08:55.540 "format": 0, 00:08:55.540 "firmware": 0, 00:08:55.540 "ns_manage": 0 00:08:55.540 }, 00:08:55.540 "multi_ctrlr": true, 00:08:55.540 
"ana_reporting": false 00:08:55.540 }, 00:08:55.540 "vs": { 00:08:55.540 "nvme_version": "1.3" 00:08:55.540 }, 00:08:55.540 "ns_data": { 00:08:55.540 "id": 1, 00:08:55.540 "can_share": true 00:08:55.540 } 00:08:55.540 } 00:08:55.540 ], 00:08:55.540 "mp_policy": "active_passive" 00:08:55.540 } 00:08:55.540 } 00:08:55.540 ] 00:08:55.540 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2880968 00:08:55.540 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:55.540 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.797 Running I/O for 10 seconds... 00:08:56.731 Latency(us) 00:08:56.732 [2024-10-13T17:38:46.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.732 Nvme0n1 : 1.00 10923.00 42.67 0.00 0.00 0.00 0.00 0.00 00:08:56.732 [2024-10-13T17:38:46.547Z] =================================================================================================================== 00:08:56.732 [2024-10-13T17:38:46.547Z] Total : 10923.00 42.67 0.00 0.00 0.00 0.00 0.00 00:08:56.732 00:08:57.665 19:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:08:57.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.665 Nvme0n1 : 2.00 11082.00 43.29 0.00 0.00 0.00 0.00 0.00 00:08:57.665 [2024-10-13T17:38:47.480Z] =================================================================================================================== 00:08:57.665 [2024-10-13T17:38:47.480Z] Total : 11082.00 43.29 0.00 0.00 0.00 0.00 0.00 00:08:57.665 00:08:57.923 true 00:08:57.923 19:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:08:57.923 19:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:58.181 19:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:58.181 19:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:58.181 19:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2880968 00:08:58.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.746 Nvme0n1 : 3.00 11113.33 43.41 0.00 0.00 0.00 0.00 0.00 00:08:58.746 [2024-10-13T17:38:48.561Z] =================================================================================================================== 00:08:58.746 [2024-10-13T17:38:48.561Z] Total : 11113.33 43.41 0.00 0.00 0.00 0.00 0.00 00:08:58.746 00:08:59.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.680 Nvme0n1 : 4.00 11160.75 43.60 0.00 0.00 0.00 0.00 0.00 00:08:59.680 [2024-10-13T17:38:49.495Z] 
=================================================================================================================== 00:08:59.680 [2024-10-13T17:38:49.495Z] Total : 11160.75 43.60 0.00 0.00 0.00 0.00 0.00 00:08:59.680 00:09:00.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.616 Nvme0n1 : 5.00 11151.40 43.56 0.00 0.00 0.00 0.00 0.00 00:09:00.616 [2024-10-13T17:38:50.431Z] =================================================================================================================== 00:09:00.616 [2024-10-13T17:38:50.431Z] Total : 11151.40 43.56 0.00 0.00 0.00 0.00 0.00 00:09:00.616 00:09:01.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.991 Nvme0n1 : 6.00 11172.00 43.64 0.00 0.00 0.00 0.00 0.00 00:09:01.991 [2024-10-13T17:38:51.806Z] =================================================================================================================== 00:09:01.991 [2024-10-13T17:38:51.806Z] Total : 11172.00 43.64 0.00 0.00 0.00 0.00 0.00 00:09:01.991 00:09:02.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.925 Nvme0n1 : 7.00 11208.86 43.78 0.00 0.00 0.00 0.00 0.00 00:09:02.925 [2024-10-13T17:38:52.740Z] =================================================================================================================== 00:09:02.925 [2024-10-13T17:38:52.740Z] Total : 11208.86 43.78 0.00 0.00 0.00 0.00 0.00 00:09:02.925 00:09:03.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.859 Nvme0n1 : 8.00 11220.62 43.83 0.00 0.00 0.00 0.00 0.00 00:09:03.859 [2024-10-13T17:38:53.674Z] =================================================================================================================== 00:09:03.859 [2024-10-13T17:38:53.674Z] Total : 11220.62 43.83 0.00 0.00 0.00 0.00 0.00 00:09:03.859 00:09:04.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.794 Nvme0n1 : 9.00 11258.00 43.98 0.00 0.00 0.00 0.00 0.00 00:09:04.794 [2024-10-13T17:38:54.609Z] =================================================================================================================== 00:09:04.794 [2024-10-13T17:38:54.609Z] Total : 11258.00 43.98 0.00 0.00 0.00 0.00 0.00 00:09:04.794 00:09:05.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.727 Nvme0n1 : 10.00 11275.20 44.04 0.00 0.00 0.00 0.00 0.00 00:09:05.727 [2024-10-13T17:38:55.542Z] =================================================================================================================== 00:09:05.727 [2024-10-13T17:38:55.542Z] Total : 11275.20 44.04 0.00 0.00 0.00 0.00 0.00 00:09:05.727 00:09:05.727 00:09:05.727 Latency(us) 00:09:05.727 [2024-10-13T17:38:55.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.727 Nvme0n1 : 10.00 11282.96 44.07 0.00 0.00 11337.38 5485.61 22427.88 00:09:05.727 [2024-10-13T17:38:55.542Z] =================================================================================================================== 00:09:05.727 [2024-10-13T17:38:55.542Z] Total : 11282.96 44.07 0.00 0.00 11337.38 5485.61 22427.88 00:09:05.727 { 00:09:05.727 "results": [ 00:09:05.727 { 00:09:05.727 "job": "Nvme0n1", 00:09:05.727 "core_mask": "0x2", 00:09:05.727 "workload": "randwrite", 00:09:05.727 "status": "finished", 00:09:05.727 "queue_depth": 128, 00:09:05.727 "io_size": 4096, 00:09:05.727 
"runtime": 10.004469, 00:09:05.727 "iops": 11282.9576462279, 00:09:05.727 "mibps": 44.074053305577735, 00:09:05.727 "io_failed": 0, 00:09:05.727 "io_timeout": 0, 00:09:05.727 "avg_latency_us": 11337.3807264614, 00:09:05.727 "min_latency_us": 5485.6059259259255, 00:09:05.727 "max_latency_us": 22427.875555555554 00:09:05.727 } 00:09:05.727 ], 00:09:05.727 "core_count": 1 00:09:05.727 } 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2880707 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2880707 ']' 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2880707 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2880707 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2880707' 00:09:05.727 killing process with pid 2880707 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2880707 00:09:05.727 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.727 00:09:05.727 Latency(us) 00:09:05.727 [2024-10-13T17:38:55.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.727 [2024-10-13T17:38:55.542Z] =================================================================================================================== 00:09:05.727 [2024-10-13T17:38:55.542Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.727 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2880707 00:09:06.690 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.977 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.236 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:07.236 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:07.494 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:07.494 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:07.494 19:38:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.753 [2024-10-13 19:38:57.547727] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:08.011 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:08.273 request: 00:09:08.273 { 00:09:08.273 "uuid": "a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d", 00:09:08.273 "method": "bdev_lvol_get_lvstores", 00:09:08.273 "req_id": 1 00:09:08.273 } 00:09:08.273 Got JSON-RPC error response 00:09:08.273 response: 00:09:08.273 { 00:09:08.273 "code": -19, 00:09:08.273 "message": "No such device" 00:09:08.273 } 00:09:08.273 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:08.273 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:08.273 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:08.273 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:08.273 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.530 aio_bdev 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.530 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.788 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6 -t 2000 00:09:09.046 [ 00:09:09.046 { 00:09:09.046 "name": "fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6", 00:09:09.046 "aliases": [ 00:09:09.046 "lvs/lvol" 00:09:09.046 ], 00:09:09.046 "product_name": "Logical Volume", 00:09:09.046 "block_size": 4096, 00:09:09.046 "num_blocks": 38912, 00:09:09.046 "uuid": "fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6", 00:09:09.046 "assigned_rate_limits": { 00:09:09.046 "rw_ios_per_sec": 0, 00:09:09.046 "rw_mbytes_per_sec": 0, 00:09:09.046 "r_mbytes_per_sec": 0, 00:09:09.046 "w_mbytes_per_sec": 0 00:09:09.046 }, 00:09:09.046 "claimed": false, 00:09:09.046 "zoned": false, 00:09:09.046 "supported_io_types": { 00:09:09.046 "read": true, 00:09:09.046 "write": true, 00:09:09.046 "unmap": true, 00:09:09.046 "flush": false, 00:09:09.046 "reset": true, 00:09:09.046 "nvme_admin": false, 00:09:09.046 "nvme_io": false, 00:09:09.046 "nvme_io_md": false, 00:09:09.046 "write_zeroes": true, 00:09:09.046 "zcopy": false, 00:09:09.046 "get_zone_info": false, 00:09:09.046 "zone_management": false, 00:09:09.046 "zone_append": false, 00:09:09.046 "compare": false, 00:09:09.046 "compare_and_write": false, 00:09:09.046 "abort": false, 00:09:09.046 "seek_hole": true, 00:09:09.046 "seek_data": true, 00:09:09.046 "copy": false, 00:09:09.046 "nvme_iov_md": false 00:09:09.046 }, 00:09:09.046 "driver_specific": { 00:09:09.046 "lvol": { 00:09:09.046 "lvol_store_uuid": "a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d", 00:09:09.046 "base_bdev": "aio_bdev", 00:09:09.046 "thin_provision": false, 00:09:09.046 "num_allocated_clusters": 38, 00:09:09.046 "snapshot": false, 00:09:09.046 "clone": false, 00:09:09.046 "esnap_clone": false 00:09:09.046 } 00:09:09.046 } 00:09:09.046 } 00:09:09.046 ] 00:09:09.046 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:09.046 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:09.046 
19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:09.305 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:09.305 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:09.305 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:09.563 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:09.563 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb08ecda-f8ad-4f79-a708-d4bcdaeef5a6 00:09:09.821 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6ec3c06-b4a5-4fea-a8ec-f541e70b9b1d 00:09:10.079 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.337 00:09:10.337 real 0m19.498s 00:09:10.337 user 0m19.359s 00:09:10.337 sys 0m1.884s 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:10.337 ************************************ 00:09:10.337 END TEST lvs_grow_clean 00:09:10.337 ************************************ 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.337 ************************************ 00:09:10.337 START TEST lvs_grow_dirty 00:09:10.337 ************************************ 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.337 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.902 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:10.903 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:11.161 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:11.161 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:11.161 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:11.419 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:11.419 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:11.419 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 lvol 150 00:09:11.676 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=74d53246-f648-4184-8084-e3ec7d6926f0 00:09:11.676 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:11.676 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:11.934 [2024-10-13 19:39:01.662600] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:11.934 [2024-10-13 19:39:01.662738] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:11.934 true 00:09:11.934 19:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:11.934 19:39:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:12.192 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:12.192 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:12.758 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74d53246-f648-4184-8084-e3ec7d6926f0 00:09:13.016 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:13.274 [2024-10-13 19:39:02.842533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.274 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2883149 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2883149 /var/tmp/bdevperf.sock 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2883149 ']' 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:13.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.532 19:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.532 [2024-10-13 19:39:03.203618] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
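The lvs_grow_dirty variant now repeats the same staging against a fresh lvstore (8f8e8761-...) and a new 150M lvol (74d53246-...), and a second bdevperf instance (pid 2883149) is being brought up against /var/tmp/bdevperf.sock. The measurement side is wired up the same way in both variants; a condensed sketch, with SPDK tree paths shortened and the flags taken from the trace:

    # Perf side of each lvs_grow iteration, condensed from the trace (paths shortened).
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 10 s of 4 KiB random writes at queue depth 128
    sleep 2
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                 # grow the lvstore while I/O is running
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expected to report 99 now
    wait                                                    # let the 10-second run finish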
00:09:13.532 [2024-10-13 19:39:03.203779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883149 ] 00:09:13.532 [2024-10-13 19:39:03.341787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.790 [2024-10-13 19:39:03.483243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.723 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.723 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:14.723 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:14.981 Nvme0n1 00:09:14.981 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.238 [ 00:09:15.238 { 00:09:15.238 "name": "Nvme0n1", 00:09:15.238 "aliases": [ 00:09:15.238 "74d53246-f648-4184-8084-e3ec7d6926f0" 00:09:15.238 ], 00:09:15.238 "product_name": "NVMe disk", 00:09:15.238 "block_size": 4096, 00:09:15.238 "num_blocks": 38912, 00:09:15.238 "uuid": "74d53246-f648-4184-8084-e3ec7d6926f0", 00:09:15.238 "numa_id": 0, 00:09:15.238 "assigned_rate_limits": { 00:09:15.238 "rw_ios_per_sec": 0, 00:09:15.238 "rw_mbytes_per_sec": 0, 00:09:15.238 "r_mbytes_per_sec": 0, 00:09:15.238 "w_mbytes_per_sec": 0 00:09:15.238 }, 00:09:15.238 "claimed": false, 00:09:15.238 "zoned": false, 00:09:15.238 "supported_io_types": { 00:09:15.238 "read": true, 00:09:15.238 "write": true, 00:09:15.238 "unmap": true, 00:09:15.238 "flush": true, 00:09:15.238 "reset": true, 00:09:15.238 "nvme_admin": true, 00:09:15.238 "nvme_io": true, 00:09:15.238 "nvme_io_md": false, 00:09:15.238 "write_zeroes": true, 00:09:15.238 "zcopy": false, 00:09:15.238 "get_zone_info": false, 00:09:15.238 "zone_management": false, 00:09:15.238 "zone_append": false, 00:09:15.238 "compare": true, 00:09:15.238 "compare_and_write": true, 00:09:15.238 "abort": true, 00:09:15.238 "seek_hole": false, 00:09:15.238 "seek_data": false, 00:09:15.238 "copy": true, 00:09:15.238 "nvme_iov_md": false 00:09:15.238 }, 00:09:15.238 "memory_domains": [ 00:09:15.238 { 00:09:15.238 "dma_device_id": "system", 00:09:15.238 "dma_device_type": 1 00:09:15.238 } 00:09:15.238 ], 00:09:15.238 "driver_specific": { 00:09:15.238 "nvme": [ 00:09:15.238 { 00:09:15.238 "trid": { 00:09:15.238 "trtype": "TCP", 00:09:15.238 "adrfam": "IPv4", 00:09:15.238 "traddr": "10.0.0.2", 00:09:15.238 "trsvcid": "4420", 00:09:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.238 }, 00:09:15.238 "ctrlr_data": { 00:09:15.238 "cntlid": 1, 00:09:15.238 "vendor_id": "0x8086", 00:09:15.238 "model_number": "SPDK bdev Controller", 00:09:15.238 "serial_number": "SPDK0", 00:09:15.238 "firmware_revision": "25.01", 00:09:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.238 "oacs": { 00:09:15.238 "security": 0, 00:09:15.238 "format": 0, 00:09:15.238 "firmware": 0, 00:09:15.238 "ns_manage": 0 00:09:15.238 }, 00:09:15.238 "multi_ctrlr": true, 00:09:15.238 
"ana_reporting": false 00:09:15.238 }, 00:09:15.238 "vs": { 00:09:15.238 "nvme_version": "1.3" 00:09:15.238 }, 00:09:15.238 "ns_data": { 00:09:15.238 "id": 1, 00:09:15.238 "can_share": true 00:09:15.238 } 00:09:15.238 } 00:09:15.238 ], 00:09:15.238 "mp_policy": "active_passive" 00:09:15.239 } 00:09:15.239 } 00:09:15.239 ] 00:09:15.239 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2883414 00:09:15.239 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.239 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.239 Running I/O for 10 seconds... 00:09:16.611 Latency(us) 00:09:16.611 [2024-10-13T17:39:06.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.611 Nvme0n1 : 1.00 10796.00 42.17 0.00 0.00 0.00 0.00 0.00 00:09:16.611 [2024-10-13T17:39:06.426Z] =================================================================================================================== 00:09:16.611 [2024-10-13T17:39:06.426Z] Total : 10796.00 42.17 0.00 0.00 0.00 0.00 0.00 00:09:16.611 00:09:17.177 19:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:17.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.435 Nvme0n1 : 2.00 10955.00 42.79 0.00 0.00 0.00 0.00 0.00 00:09:17.435 [2024-10-13T17:39:07.250Z] =================================================================================================================== 00:09:17.435 [2024-10-13T17:39:07.250Z] Total : 10955.00 42.79 0.00 0.00 0.00 0.00 0.00 00:09:17.435 00:09:17.435 true 00:09:17.435 19:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:17.435 19:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.002 19:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.002 19:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.002 19:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2883414 00:09:18.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.259 Nvme0n1 : 3.00 11008.00 43.00 0.00 0.00 0.00 0.00 0.00 00:09:18.259 [2024-10-13T17:39:08.074Z] =================================================================================================================== 00:09:18.259 [2024-10-13T17:39:08.074Z] Total : 11008.00 43.00 0.00 0.00 0.00 0.00 0.00 00:09:18.259 00:09:19.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.192 Nvme0n1 : 4.00 11050.00 43.16 0.00 0.00 0.00 0.00 0.00 00:09:19.192 [2024-10-13T17:39:09.007Z] 
=================================================================================================================== 00:09:19.192 [2024-10-13T17:39:09.007Z] Total : 11050.00 43.16 0.00 0.00 0.00 0.00 0.00 00:09:19.192 00:09:20.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.567 Nvme0n1 : 5.00 11100.60 43.36 0.00 0.00 0.00 0.00 0.00 00:09:20.567 [2024-10-13T17:39:10.382Z] =================================================================================================================== 00:09:20.567 [2024-10-13T17:39:10.382Z] Total : 11100.60 43.36 0.00 0.00 0.00 0.00 0.00 00:09:20.567 00:09:21.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.499 Nvme0n1 : 6.00 11081.67 43.29 0.00 0.00 0.00 0.00 0.00 00:09:21.499 [2024-10-13T17:39:11.314Z] =================================================================================================================== 00:09:21.499 [2024-10-13T17:39:11.314Z] Total : 11081.67 43.29 0.00 0.00 0.00 0.00 0.00 00:09:21.499 00:09:22.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.433 Nvme0n1 : 7.00 11131.43 43.48 0.00 0.00 0.00 0.00 0.00 00:09:22.433 [2024-10-13T17:39:12.248Z] =================================================================================================================== 00:09:22.433 [2024-10-13T17:39:12.248Z] Total : 11131.43 43.48 0.00 0.00 0.00 0.00 0.00 00:09:22.433 00:09:23.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.367 Nvme0n1 : 8.00 11161.00 43.60 0.00 0.00 0.00 0.00 0.00 00:09:23.367 [2024-10-13T17:39:13.182Z] =================================================================================================================== 00:09:23.367 [2024-10-13T17:39:13.182Z] Total : 11161.00 43.60 0.00 0.00 0.00 0.00 0.00 00:09:23.367 00:09:24.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.301 Nvme0n1 : 9.00 11198.11 43.74 0.00 0.00 0.00 0.00 0.00 00:09:24.301 [2024-10-13T17:39:14.116Z] =================================================================================================================== 00:09:24.301 [2024-10-13T17:39:14.116Z] Total : 11198.11 43.74 0.00 0.00 0.00 0.00 0.00 00:09:24.301 00:09:25.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.234 Nvme0n1 : 10.00 11215.10 43.81 0.00 0.00 0.00 0.00 0.00 00:09:25.234 [2024-10-13T17:39:15.049Z] =================================================================================================================== 00:09:25.234 [2024-10-13T17:39:15.049Z] Total : 11215.10 43.81 0.00 0.00 0.00 0.00 0.00 00:09:25.234 00:09:25.234 00:09:25.234 Latency(us) 00:09:25.234 [2024-10-13T17:39:15.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.234 Nvme0n1 : 10.01 11222.05 43.84 0.00 0.00 11399.50 2609.30 30486.38 00:09:25.234 [2024-10-13T17:39:15.049Z] =================================================================================================================== 00:09:25.234 [2024-10-13T17:39:15.049Z] Total : 11222.05 43.84 0.00 0.00 11399.50 2609.30 30486.38 00:09:25.234 { 00:09:25.234 "results": [ 00:09:25.234 { 00:09:25.234 "job": "Nvme0n1", 00:09:25.234 "core_mask": "0x2", 00:09:25.234 "workload": "randwrite", 00:09:25.234 "status": "finished", 00:09:25.234 "queue_depth": 128, 00:09:25.234 "io_size": 4096, 00:09:25.234 
"runtime": 10.005215, 00:09:25.234 "iops": 11222.047702123342, 00:09:25.234 "mibps": 43.836123836419304, 00:09:25.234 "io_failed": 0, 00:09:25.234 "io_timeout": 0, 00:09:25.234 "avg_latency_us": 11399.49608312362, 00:09:25.234 "min_latency_us": 2609.303703703704, 00:09:25.234 "max_latency_us": 30486.376296296297 00:09:25.234 } 00:09:25.234 ], 00:09:25.234 "core_count": 1 00:09:25.234 } 00:09:25.234 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2883149 00:09:25.234 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2883149 ']' 00:09:25.234 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2883149 00:09:25.234 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:25.234 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.234 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2883149 00:09:25.492 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:25.493 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:25.493 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2883149' 00:09:25.493 killing process with pid 2883149 00:09:25.493 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2883149 00:09:25.493 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.493 00:09:25.493 Latency(us) 00:09:25.493 [2024-10-13T17:39:15.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.493 [2024-10-13T17:39:15.308Z] =================================================================================================================== 00:09:25.493 [2024-10-13T17:39:15.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.493 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2883149 00:09:26.427 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.684 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.942 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:26.942 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:27.200 19:39:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2880135 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2880135 00:09:27.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2880135 Killed "${NVMF_APP[@]}" "$@" 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2885326 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2885326 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2885326 ']' 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.200 19:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.200 [2024-10-13 19:39:16.977995] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:09:27.200 [2024-10-13 19:39:16.978152] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.458 [2024-10-13 19:39:17.143216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.716 [2024-10-13 19:39:17.281042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.716 [2024-10-13 19:39:17.281125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.716 [2024-10-13 19:39:17.281150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.716 [2024-10-13 19:39:17.281173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:27.716 [2024-10-13 19:39:17.281192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.716 [2024-10-13 19:39:17.282862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.280 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.280 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:28.280 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:28.280 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.280 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.281 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.281 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.538 [2024-10-13 19:39:18.274425] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:28.538 [2024-10-13 19:39:18.274664] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:28.538 [2024-10-13 19:39:18.274755] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 74d53246-f648-4184-8084-e3ec7d6926f0 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=74d53246-f648-4184-8084-e3ec7d6926f0 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.538 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.796 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74d53246-f648-4184-8084-e3ec7d6926f0 -t 2000 00:09:29.054 [ 00:09:29.054 { 00:09:29.054 "name": "74d53246-f648-4184-8084-e3ec7d6926f0", 00:09:29.054 "aliases": [ 00:09:29.054 "lvs/lvol" 00:09:29.054 ], 00:09:29.054 "product_name": "Logical Volume", 00:09:29.054 "block_size": 4096, 00:09:29.054 "num_blocks": 38912, 00:09:29.054 "uuid": "74d53246-f648-4184-8084-e3ec7d6926f0", 00:09:29.054 "assigned_rate_limits": { 00:09:29.054 "rw_ios_per_sec": 0, 00:09:29.054 "rw_mbytes_per_sec": 0, 
00:09:29.054 "r_mbytes_per_sec": 0, 00:09:29.054 "w_mbytes_per_sec": 0 00:09:29.054 }, 00:09:29.054 "claimed": false, 00:09:29.054 "zoned": false, 00:09:29.054 "supported_io_types": { 00:09:29.054 "read": true, 00:09:29.054 "write": true, 00:09:29.054 "unmap": true, 00:09:29.054 "flush": false, 00:09:29.054 "reset": true, 00:09:29.054 "nvme_admin": false, 00:09:29.054 "nvme_io": false, 00:09:29.054 "nvme_io_md": false, 00:09:29.054 "write_zeroes": true, 00:09:29.054 "zcopy": false, 00:09:29.054 "get_zone_info": false, 00:09:29.054 "zone_management": false, 00:09:29.054 "zone_append": false, 00:09:29.054 "compare": false, 00:09:29.054 "compare_and_write": false, 00:09:29.054 "abort": false, 00:09:29.054 "seek_hole": true, 00:09:29.054 "seek_data": true, 00:09:29.054 "copy": false, 00:09:29.054 "nvme_iov_md": false 00:09:29.054 }, 00:09:29.054 "driver_specific": { 00:09:29.054 "lvol": { 00:09:29.054 "lvol_store_uuid": "8f8e8761-7318-4a40-b9cd-460da9bc9397", 00:09:29.054 "base_bdev": "aio_bdev", 00:09:29.054 "thin_provision": false, 00:09:29.054 "num_allocated_clusters": 38, 00:09:29.054 "snapshot": false, 00:09:29.054 "clone": false, 00:09:29.054 "esnap_clone": false 00:09:29.054 } 00:09:29.054 } 00:09:29.054 } 00:09:29.054 ] 00:09:29.054 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:29.054 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:29.054 19:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:29.620 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:29.620 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:29.620 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:29.620 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:29.620 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.886 [2024-10-13 19:39:19.663286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:29.886 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:29.886 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:29.886 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:29.886 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.886 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.886 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.205 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.205 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.205 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.205 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.205 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:30.205 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:30.489 request: 00:09:30.489 { 00:09:30.489 "uuid": "8f8e8761-7318-4a40-b9cd-460da9bc9397", 00:09:30.489 "method": "bdev_lvol_get_lvstores", 00:09:30.489 "req_id": 1 00:09:30.489 } 00:09:30.489 Got JSON-RPC error response 00:09:30.489 response: 00:09:30.489 { 00:09:30.489 "code": -19, 00:09:30.489 "message": "No such device" 00:09:30.489 } 00:09:30.489 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:30.489 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.489 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.489 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.489 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.747 aio_bdev 00:09:30.747 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 74d53246-f648-4184-8084-e3ec7d6926f0 00:09:30.747 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=74d53246-f648-4184-8084-e3ec7d6926f0 00:09:30.747 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.747 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:30.747 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.747 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.747 19:39:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:31.005 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74d53246-f648-4184-8084-e3ec7d6926f0 -t 2000 00:09:31.264 [ 00:09:31.264 { 00:09:31.264 "name": "74d53246-f648-4184-8084-e3ec7d6926f0", 00:09:31.264 "aliases": [ 00:09:31.264 "lvs/lvol" 00:09:31.264 ], 00:09:31.264 "product_name": "Logical Volume", 00:09:31.264 "block_size": 4096, 00:09:31.264 "num_blocks": 38912, 00:09:31.264 "uuid": "74d53246-f648-4184-8084-e3ec7d6926f0", 00:09:31.264 "assigned_rate_limits": { 00:09:31.264 "rw_ios_per_sec": 0, 00:09:31.264 "rw_mbytes_per_sec": 0, 00:09:31.264 "r_mbytes_per_sec": 0, 00:09:31.264 "w_mbytes_per_sec": 0 00:09:31.264 }, 00:09:31.264 "claimed": false, 00:09:31.264 "zoned": false, 00:09:31.264 "supported_io_types": { 00:09:31.264 "read": true, 00:09:31.264 "write": true, 00:09:31.264 "unmap": true, 00:09:31.264 "flush": false, 00:09:31.264 "reset": true, 00:09:31.264 "nvme_admin": false, 00:09:31.264 "nvme_io": false, 00:09:31.264 "nvme_io_md": false, 00:09:31.264 "write_zeroes": true, 00:09:31.264 "zcopy": false, 00:09:31.264 "get_zone_info": false, 00:09:31.264 "zone_management": false, 00:09:31.264 "zone_append": false, 00:09:31.264 "compare": false, 00:09:31.264 "compare_and_write": false, 00:09:31.264 "abort": false, 00:09:31.264 "seek_hole": true, 00:09:31.264 "seek_data": true, 00:09:31.264 "copy": false, 00:09:31.264 "nvme_iov_md": false 00:09:31.264 }, 00:09:31.264 "driver_specific": { 00:09:31.264 "lvol": { 00:09:31.264 "lvol_store_uuid": "8f8e8761-7318-4a40-b9cd-460da9bc9397", 00:09:31.264 "base_bdev": "aio_bdev", 00:09:31.264 "thin_provision": false, 00:09:31.264 "num_allocated_clusters": 38, 00:09:31.264 "snapshot": false, 00:09:31.264 "clone": false, 00:09:31.264 "esnap_clone": false 00:09:31.264 } 00:09:31.264 } 00:09:31.264 } 00:09:31.264 ] 00:09:31.264 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:31.264 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:31.264 19:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:31.522 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:31.522 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:31.522 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:31.780 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:31.780 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74d53246-f648-4184-8084-e3ec7d6926f0 00:09:32.038 19:39:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f8e8761-7318-4a40-b9cd-460da9bc9397 00:09:32.295 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:32.553 00:09:32.553 real 0m22.180s 00:09:32.553 user 0m56.189s 00:09:32.553 sys 0m4.621s 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.553 ************************************ 00:09:32.553 END TEST lvs_grow_dirty 00:09:32.553 ************************************ 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:32.553 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:32.554 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:32.554 nvmf_trace.0 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:32.811 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.812 rmmod nvme_tcp 00:09:32.812 rmmod nvme_fabrics 00:09:32.812 rmmod nvme_keyring 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:32.812 
19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2885326 ']' 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2885326 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2885326 ']' 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2885326 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2885326 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2885326' 00:09:32.812 killing process with pid 2885326 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2885326 00:09:32.812 19:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2885326 00:09:34.185 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:34.185 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.186 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.089 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.089 00:09:36.089 real 0m49.082s 00:09:36.089 user 1m23.713s 00:09:36.090 sys 0m8.692s 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 ************************************ 00:09:36.090 END TEST nvmf_lvs_grow 00:09:36.090 ************************************ 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 ************************************ 00:09:36.090 START TEST nvmf_bdev_io_wait 00:09:36.090 ************************************ 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.090 * Looking for test storage... 00:09:36.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.090 --rc genhtml_branch_coverage=1 00:09:36.090 --rc genhtml_function_coverage=1 00:09:36.090 --rc genhtml_legend=1 00:09:36.090 --rc geninfo_all_blocks=1 00:09:36.090 --rc geninfo_unexecuted_blocks=1 00:09:36.090 00:09:36.090 ' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.090 --rc genhtml_branch_coverage=1 00:09:36.090 --rc genhtml_function_coverage=1 00:09:36.090 --rc genhtml_legend=1 00:09:36.090 --rc geninfo_all_blocks=1 00:09:36.090 --rc geninfo_unexecuted_blocks=1 00:09:36.090 00:09:36.090 ' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:36.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.090 --rc genhtml_branch_coverage=1 00:09:36.090 --rc genhtml_function_coverage=1 00:09:36.090 --rc genhtml_legend=1 00:09:36.090 --rc geninfo_all_blocks=1 00:09:36.090 --rc geninfo_unexecuted_blocks=1 00:09:36.090 00:09:36.090 ' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.090 --rc genhtml_branch_coverage=1 00:09:36.090 --rc genhtml_function_coverage=1 00:09:36.090 --rc genhtml_legend=1 00:09:36.090 --rc geninfo_all_blocks=1 00:09:36.090 --rc geninfo_unexecuted_blocks=1 00:09:36.090 00:09:36.090 ' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.090 19:39:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.090 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:36.091 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:36.349 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.349 19:39:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.264 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.264 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.264 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.264 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.264 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:38.265 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:38.265 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.265 19:39:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:38.265 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:38.265 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.265 19:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:09:38.265 00:09:38.265 --- 10.0.0.2 ping statistics --- 00:09:38.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.265 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:09:38.265 00:09:38.265 --- 10.0.0.1 ping statistics --- 00:09:38.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.265 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:38.265 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2888071 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2888071 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2888071 ']' 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.523 19:39:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.523 [2024-10-13 19:39:28.200642] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:09:38.523 [2024-10-13 19:39:28.200807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.780 [2024-10-13 19:39:28.346296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.780 [2024-10-13 19:39:28.489013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.780 [2024-10-13 19:39:28.489103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.780 [2024-10-13 19:39:28.489129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.780 [2024-10-13 19:39:28.489154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.780 [2024-10-13 19:39:28.489175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.780 [2024-10-13 19:39:28.492050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.780 [2024-10-13 19:39:28.492119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.780 [2024-10-13 19:39:28.492213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.780 [2024-10-13 19:39:28.492219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:39.711 [2024-10-13 19:39:29.431346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 Malloc0 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.711 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.969 [2024-10-13 19:39:29.538033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2888345 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2888347 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:39.969 { 00:09:39.969 "params": { 
00:09:39.969 "name": "Nvme$subsystem", 00:09:39.969 "trtype": "$TEST_TRANSPORT", 00:09:39.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.969 "adrfam": "ipv4", 00:09:39.969 "trsvcid": "$NVMF_PORT", 00:09:39.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.969 "hdgst": ${hdgst:-false}, 00:09:39.969 "ddgst": ${ddgst:-false} 00:09:39.969 }, 00:09:39.969 "method": "bdev_nvme_attach_controller" 00:09:39.969 } 00:09:39.969 EOF 00:09:39.969 )") 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2888349 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:39.969 { 00:09:39.969 "params": { 00:09:39.969 "name": "Nvme$subsystem", 00:09:39.969 "trtype": "$TEST_TRANSPORT", 00:09:39.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.969 "adrfam": "ipv4", 00:09:39.969 "trsvcid": "$NVMF_PORT", 00:09:39.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.969 "hdgst": ${hdgst:-false}, 00:09:39.969 "ddgst": ${ddgst:-false} 00:09:39.969 }, 00:09:39.969 "method": "bdev_nvme_attach_controller" 00:09:39.969 } 00:09:39.969 EOF 00:09:39.969 )") 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2888352 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:39.969 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:39.969 { 00:09:39.969 "params": { 00:09:39.970 "name": "Nvme$subsystem", 00:09:39.970 "trtype": "$TEST_TRANSPORT", 00:09:39.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.970 "adrfam": "ipv4", 00:09:39.970 "trsvcid": "$NVMF_PORT", 00:09:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.970 "hdgst": ${hdgst:-false}, 
00:09:39.970 "ddgst": ${ddgst:-false} 00:09:39.970 }, 00:09:39.970 "method": "bdev_nvme_attach_controller" 00:09:39.970 } 00:09:39.970 EOF 00:09:39.970 )") 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:39.970 { 00:09:39.970 "params": { 00:09:39.970 "name": "Nvme$subsystem", 00:09:39.970 "trtype": "$TEST_TRANSPORT", 00:09:39.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.970 "adrfam": "ipv4", 00:09:39.970 "trsvcid": "$NVMF_PORT", 00:09:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.970 "hdgst": ${hdgst:-false}, 00:09:39.970 "ddgst": ${ddgst:-false} 00:09:39.970 }, 00:09:39.970 "method": "bdev_nvme_attach_controller" 00:09:39.970 } 00:09:39.970 EOF 00:09:39.970 )") 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2888345 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
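Spelled out, the rpc_cmd calls traced above provision the target end to end. The equivalent direct scripts/rpc.py invocations would look roughly like the following; the rpc.py path and the /var/tmp/spdk.sock socket are assumptions about the harness wrapper, while the methods and arguments are copied from the log.

# Target provisioning for the bdev_io_wait test, as issued via rpc_cmd above.
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"    # assumed wrapper target
$rpc bdev_set_options -p 5 -c 1        # tiny bdev_io pool so I/O has to queue and wait
$rpc framework_start_init              # leave the --wait-for-rpc pre-init state
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420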
00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:39.970 "params": { 00:09:39.970 "name": "Nvme1", 00:09:39.970 "trtype": "tcp", 00:09:39.970 "traddr": "10.0.0.2", 00:09:39.970 "adrfam": "ipv4", 00:09:39.970 "trsvcid": "4420", 00:09:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.970 "hdgst": false, 00:09:39.970 "ddgst": false 00:09:39.970 }, 00:09:39.970 "method": "bdev_nvme_attach_controller" 00:09:39.970 }' 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:39.970 "params": { 00:09:39.970 "name": "Nvme1", 00:09:39.970 "trtype": "tcp", 00:09:39.970 "traddr": "10.0.0.2", 00:09:39.970 "adrfam": "ipv4", 00:09:39.970 "trsvcid": "4420", 00:09:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.970 "hdgst": false, 00:09:39.970 "ddgst": false 00:09:39.970 }, 00:09:39.970 "method": "bdev_nvme_attach_controller" 00:09:39.970 }' 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:39.970 "params": { 00:09:39.970 "name": "Nvme1", 00:09:39.970 "trtype": "tcp", 00:09:39.970 "traddr": "10.0.0.2", 00:09:39.970 "adrfam": "ipv4", 00:09:39.970 "trsvcid": "4420", 00:09:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.970 "hdgst": false, 00:09:39.970 "ddgst": false 00:09:39.970 }, 00:09:39.970 "method": "bdev_nvme_attach_controller" 00:09:39.970 }' 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:39.970 19:39:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:39.970 "params": { 00:09:39.970 "name": "Nvme1", 00:09:39.970 "trtype": "tcp", 00:09:39.970 "traddr": "10.0.0.2", 00:09:39.970 "adrfam": "ipv4", 00:09:39.970 "trsvcid": "4420", 00:09:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.970 "hdgst": false, 00:09:39.970 "ddgst": false 00:09:39.970 }, 00:09:39.970 "method": "bdev_nvme_attach_controller" 00:09:39.970 }' 00:09:39.970 [2024-10-13 19:39:29.626287] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:09:39.970 [2024-10-13 19:39:29.626287] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
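The four printf blocks above are the per-instance attach configuration each bdevperf reads from /dev/fd/63. Assembled, it is a small JSON config; the outer "subsystems"/"bdev" wrapper below is an assumption about what gen_nvmf_target_json emits, while the inner object is exactly what the log prints.

# Approximate shape of the config handed to bdevperf via --json /dev/fd/63.
jq . <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON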
00:09:39.970 [2024-10-13 19:39:29.626467] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-13 19:39:29.626467] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:39.970 --proc-type=auto ] 00:09:39.970 [2024-10-13 19:39:29.628914] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:09:39.970 [2024-10-13 19:39:29.628914] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:09:39.970 [2024-10-13 19:39:29.629058] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-13 19:39:29.629058] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:39.970 --proc-type=auto ] 00:09:40.228 [2024-10-13 19:39:29.868285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.228 [2024-10-13 19:39:29.968460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.228 [2024-10-13 19:39:29.992264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.485 [2024-10-13 19:39:30.046803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.485 [2024-10-13 19:39:30.097434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:40.485 [2024-10-13 19:39:30.127208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.485 [2024-10-13 19:39:30.164318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:40.485 [2024-10-13 19:39:30.243241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:40.743 Running I/O for 1 seconds... 00:09:40.743 Running I/O for 1 seconds... 00:09:40.743 Running I/O for 1 seconds... 00:09:41.001 Running I/O for 1 seconds... 
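The trace above launches four bdevperf instances concurrently against the single Malloc0-backed namespace, one per I/O type, and the per-workload results follow below. A condensed sketch of the launch-and-wait pattern; gen_nvmf_target_json is the harness helper whose output is shown earlier, and the two wait lines simplify the separate wait calls in the script.

# One bdevperf per workload, each pinned to its own core and shm id.
BDEVPERF=./build/examples/bdevperf
json=$(gen_nvmf_target_json)    # bdev_nvme_attach_controller config shown above
$BDEVPERF -m 0x10 -i 1 --json <(echo "$json") -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(echo "$json") -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(echo "$json") -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(echo "$json") -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID; sync
wait $READ_PID $FLUSH_PID $UNMAP_PID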
00:09:41.936 4580.00 IOPS, 17.89 MiB/s 00:09:41.937 Latency(us) 00:09:41.937 [2024-10-13T17:39:31.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.937 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:41.937 Nvme1n1 : 1.03 4597.52 17.96 0.00 0.00 27542.55 7427.41 41748.86 00:09:41.937 [2024-10-13T17:39:31.752Z] =================================================================================================================== 00:09:41.937 [2024-10-13T17:39:31.752Z] Total : 4597.52 17.96 0.00 0.00 27542.55 7427.41 41748.86 00:09:41.937 6102.00 IOPS, 23.84 MiB/s 00:09:41.937 Latency(us) 00:09:41.937 [2024-10-13T17:39:31.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.937 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:41.937 Nvme1n1 : 1.01 6163.94 24.08 0.00 0.00 20634.98 5534.15 33204.91 00:09:41.937 [2024-10-13T17:39:31.752Z] =================================================================================================================== 00:09:41.937 [2024-10-13T17:39:31.752Z] Total : 6163.94 24.08 0.00 0.00 20634.98 5534.15 33204.91 00:09:41.937 4534.00 IOPS, 17.71 MiB/s 00:09:41.937 Latency(us) 00:09:41.937 [2024-10-13T17:39:31.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.937 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:41.937 Nvme1n1 : 1.01 4665.63 18.23 0.00 0.00 27326.92 6505.05 51263.72 00:09:41.937 [2024-10-13T17:39:31.752Z] =================================================================================================================== 00:09:41.937 [2024-10-13T17:39:31.752Z] Total : 4665.63 18.23 0.00 0.00 27326.92 6505.05 51263.72 00:09:41.937 158600.00 IOPS, 619.53 MiB/s 00:09:41.937 Latency(us) 00:09:41.937 [2024-10-13T17:39:31.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.937 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:41.937 Nvme1n1 : 1.00 158271.99 618.25 0.00 0.00 804.59 385.33 2014.63 00:09:41.937 [2024-10-13T17:39:31.752Z] =================================================================================================================== 00:09:41.937 [2024-10-13T17:39:31.752Z] Total : 158271.99 618.25 0.00 0.00 804.59 385.33 2014.63 00:09:42.503 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2888347 00:09:42.503 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2888349 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2888352 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.761 rmmod nvme_tcp 00:09:42.761 rmmod nvme_fabrics 00:09:42.761 rmmod nvme_keyring 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2888071 ']' 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2888071 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2888071 ']' 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2888071 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2888071 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2888071' 00:09:42.761 killing process with pid 2888071 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2888071 00:09:42.761 19:39:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2888071 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.136 19:39:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.136 19:39:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.042 00:09:46.042 real 0m9.831s 00:09:46.042 user 0m27.599s 00:09:46.042 sys 0m3.920s 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.042 ************************************ 00:09:46.042 END TEST nvmf_bdev_io_wait 00:09:46.042 ************************************ 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.042 ************************************ 00:09:46.042 START TEST nvmf_queue_depth 00:09:46.042 ************************************ 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:46.042 * Looking for test storage... 
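The nvmftestfini teardown traced at the end of the bdev_io_wait run above reduces to the following. The netns deletion is an assumption about what remove_spdk_ns does; the rest mirrors the logged commands, and the SPDK_NVMF comment added at setup is what lets the harness drop only its own iptables rules here.

# Teardown mirroring nvmftestfini: stop the target, unload the host modules,
# strip only SPDK-tagged firewall rules, then dismantle the namespace.
kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk        # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1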
00:09:46.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.042 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.042 --rc genhtml_branch_coverage=1 00:09:46.042 --rc genhtml_function_coverage=1 00:09:46.042 --rc genhtml_legend=1 00:09:46.043 --rc geninfo_all_blocks=1 00:09:46.043 --rc geninfo_unexecuted_blocks=1 00:09:46.043 00:09:46.043 ' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.043 --rc genhtml_branch_coverage=1 00:09:46.043 --rc genhtml_function_coverage=1 00:09:46.043 --rc genhtml_legend=1 00:09:46.043 --rc geninfo_all_blocks=1 00:09:46.043 --rc geninfo_unexecuted_blocks=1 00:09:46.043 00:09:46.043 ' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.043 --rc genhtml_branch_coverage=1 00:09:46.043 --rc genhtml_function_coverage=1 00:09:46.043 --rc genhtml_legend=1 00:09:46.043 --rc geninfo_all_blocks=1 00:09:46.043 --rc geninfo_unexecuted_blocks=1 00:09:46.043 00:09:46.043 ' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.043 --rc genhtml_branch_coverage=1 00:09:46.043 --rc genhtml_function_coverage=1 00:09:46.043 --rc genhtml_legend=1 00:09:46.043 --rc geninfo_all_blocks=1 00:09:46.043 --rc geninfo_unexecuted_blocks=1 00:09:46.043 00:09:46.043 ' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.043 19:39:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:48.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.574 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:48.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:48.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:48.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.575 19:39:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:48.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:09:48.575 00:09:48.575 --- 10.0.0.2 ping statistics --- 00:09:48.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.575 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:09:48.575 00:09:48.575 --- 10.0.0.1 ping statistics --- 00:09:48.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.575 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2890734 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2890734 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2890734 ']' 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.575 19:39:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.575 [2024-10-13 19:39:38.193376] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
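The queue_depth test then brings up its own single-core target (nvmfappstart -m 0x2) inside the same namespace and blocks until the RPC socket answers. Below is a simplified stand-in for the start-and-wait logic logged above; the polling loop only approximates the waitforlisten helper, and the rpc.py/socket paths are assumptions.

# Start nvmf_tgt on core mask 0x2 in the target namespace, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done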
00:09:48.575 [2024-10-13 19:39:38.193538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.575 [2024-10-13 19:39:38.330652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.834 [2024-10-13 19:39:38.461888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.834 [2024-10-13 19:39:38.461984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.834 [2024-10-13 19:39:38.462010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.834 [2024-10-13 19:39:38.462034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.834 [2024-10-13 19:39:38.462058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.834 [2024-10-13 19:39:38.463709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.400 [2024-10-13 19:39:39.180241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.400 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.659 Malloc0 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.659 19:39:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.659 [2024-10-13 19:39:39.296772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2890886 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2890886 /var/tmp/bdevperf.sock 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2890886 ']' 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.659 19:39:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.659 [2024-10-13 19:39:39.383270] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
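The queue_depth setup above boils down to: start nvmf_tgt inside the target namespace, then configure it over RPC with a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420. A minimal stand-alone sketch of that sequence follows; paths, arguments, and names are taken from the log, while the harness's rpc_cmd and waitforlisten helpers are approximated with a direct rpc.py call and a plain sleep.

#!/usr/bin/env bash
# Target-side sketch of the nvmf_queue_depth setup shown above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

# Start the target on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF),
# inside the namespace that owns the target interface.
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2   # stand-in for the harness's waitforlisten polling of /var/tmp/spdk.sock

# TCP transport with the options used in the log ('-t tcp -o', plus '-u 8192' from queue_depth.sh).
"$rpc" nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks as the namespace backing store.
"$rpc" bdev_malloc_create 64 512 -b Malloc0
# Subsystem that accepts any host (-a), exports Malloc0, and listens on the namespaced interface.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420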
00:09:49.659 [2024-10-13 19:39:39.383428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890886 ] 00:09:49.917 [2024-10-13 19:39:39.513964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.917 [2024-10-13 19:39:39.648526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.851 NVMe0n1 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.851 19:39:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.851 Running I/O for 10 seconds... 00:09:53.161 5689.00 IOPS, 22.22 MiB/s [2024-10-13T17:39:43.910Z] 5835.50 IOPS, 22.79 MiB/s [2024-10-13T17:39:44.845Z] 5976.00 IOPS, 23.34 MiB/s [2024-10-13T17:39:45.779Z] 6015.75 IOPS, 23.50 MiB/s [2024-10-13T17:39:46.713Z] 6048.60 IOPS, 23.63 MiB/s [2024-10-13T17:39:48.087Z] 6072.00 IOPS, 23.72 MiB/s [2024-10-13T17:39:49.019Z] 6083.43 IOPS, 23.76 MiB/s [2024-10-13T17:39:49.953Z] 6105.50 IOPS, 23.85 MiB/s [2024-10-13T17:39:50.888Z] 6103.33 IOPS, 23.84 MiB/s [2024-10-13T17:39:50.888Z] 6109.80 IOPS, 23.87 MiB/s 00:10:01.073 Latency(us) 00:10:01.073 [2024-10-13T17:39:50.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.073 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:01.073 Verification LBA range: start 0x0 length 0x4000 00:10:01.073 NVMe0n1 : 10.13 6127.76 23.94 0.00 0.00 165990.75 28738.75 99032.18 00:10:01.073 [2024-10-13T17:39:50.888Z] =================================================================================================================== 00:10:01.073 [2024-10-13T17:39:50.888Z] Total : 6127.76 23.94 0.00 0.00 165990.75 28738.75 99032.18 00:10:01.073 { 00:10:01.073 "results": [ 00:10:01.073 { 00:10:01.073 "job": "NVMe0n1", 00:10:01.073 "core_mask": "0x1", 00:10:01.073 "workload": "verify", 00:10:01.073 "status": "finished", 00:10:01.073 "verify_range": { 00:10:01.073 "start": 0, 00:10:01.073 "length": 16384 00:10:01.073 }, 00:10:01.073 "queue_depth": 1024, 00:10:01.073 "io_size": 4096, 00:10:01.073 "runtime": 10.127839, 00:10:01.073 "iops": 6127.76328691639, 00:10:01.073 "mibps": 23.936575339517148, 00:10:01.073 "io_failed": 0, 00:10:01.073 "io_timeout": 0, 00:10:01.073 "avg_latency_us": 165990.75249887357, 00:10:01.073 "min_latency_us": 28738.74962962963, 00:10:01.073 "max_latency_us": 99032.17777777778 00:10:01.073 } 00:10:01.073 ], 00:10:01.073 "core_count": 1 00:10:01.073 } 00:10:01.073 19:39:50 
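On the initiator side the test runs bdevperf with a queue depth of 1024 and 4 KiB verify I/O for 10 seconds, attaches the remote subsystem as bdev NVMe0n1 over TCP, and triggers the run with bdevperf.py perform_tests; that is what produced the ~6128 IOPS / ~166 ms average latency summary above. A condensed sketch of those steps, using the binaries and arguments from the log; the sleep and the final kill stand in for the harness's waitforlisten and killprocess helpers.

#!/usr/bin/env bash
# Initiator-side sketch of the queue-depth run shown above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock

# bdevperf starts idle and waits for RPC configuration (-z): queue depth 1024, 4 KiB I/O, verify workload, 10 s.
"$spdk/build/examples/bdevperf" -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
sleep 2   # stand-in for waitforlisten on the bdevperf RPC socket

# Attach the remote namespace as bdev NVMe0n1 over NVMe/TCP.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run the configured workload; results are reported as the JSON block above.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

kill "$bdevperf_pid"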
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2890886 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2890886 ']' 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2890886 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2890886 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2890886' 00:10:01.073 killing process with pid 2890886 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2890886 00:10:01.073 Received shutdown signal, test time was about 10.000000 seconds 00:10:01.073 00:10:01.073 Latency(us) 00:10:01.073 [2024-10-13T17:39:50.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.073 [2024-10-13T17:39:50.888Z] =================================================================================================================== 00:10:01.073 [2024-10-13T17:39:50.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:01.073 19:39:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2890886 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.019 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.019 rmmod nvme_tcp 00:10:02.019 rmmod nvme_fabrics 00:10:02.019 rmmod nvme_keyring 00:10:02.278 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.278 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:02.278 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:02.278 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2890734 ']' 00:10:02.278 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2890734 00:10:02.278 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2890734 ']' 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 2890734 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2890734 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2890734' 00:10:02.279 killing process with pid 2890734 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2890734 00:10:02.279 19:39:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2890734 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.654 19:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.558 00:10:05.558 real 0m19.669s 00:10:05.558 user 0m27.999s 00:10:05.558 sys 0m3.246s 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.558 ************************************ 00:10:05.558 END TEST nvmf_queue_depth 00:10:05.558 ************************************ 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.558 ************************************ 00:10:05.558 START TEST nvmf_target_multipath 00:10:05.558 ************************************ 00:10:05.558 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:05.817 * Looking for test storage... 00:10:05.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.817 --rc genhtml_branch_coverage=1 00:10:05.817 --rc genhtml_function_coverage=1 00:10:05.817 --rc genhtml_legend=1 00:10:05.817 --rc geninfo_all_blocks=1 00:10:05.817 --rc geninfo_unexecuted_blocks=1 00:10:05.817 00:10:05.817 ' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.817 --rc genhtml_branch_coverage=1 00:10:05.817 --rc genhtml_function_coverage=1 00:10:05.817 --rc genhtml_legend=1 00:10:05.817 --rc geninfo_all_blocks=1 00:10:05.817 --rc geninfo_unexecuted_blocks=1 00:10:05.817 00:10:05.817 ' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.817 --rc genhtml_branch_coverage=1 00:10:05.817 --rc genhtml_function_coverage=1 00:10:05.817 --rc genhtml_legend=1 00:10:05.817 --rc geninfo_all_blocks=1 00:10:05.817 --rc geninfo_unexecuted_blocks=1 00:10:05.817 00:10:05.817 ' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.817 --rc genhtml_branch_coverage=1 00:10:05.817 --rc genhtml_function_coverage=1 00:10:05.817 --rc genhtml_legend=1 00:10:05.817 --rc geninfo_all_blocks=1 00:10:05.817 --rc geninfo_unexecuted_blocks=1 00:10:05.817 00:10:05.817 ' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.817 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.818 19:39:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.748 19:39:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.748 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.749 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:10:08.032 00:10:08.032 --- 10.0.0.2 ping statistics --- 00:10:08.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.032 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:08.032 00:10:08.032 --- 10.0.0.1 ping statistics --- 00:10:08.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.032 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:08.032 only one NIC for nvmf test 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
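Both suites build the same two-sided test network out of the single dual-port E810 (ice) NIC found during device discovery: cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, TCP port 4420 is opened, and the two pings above confirm connectivity in each direction. A minimal sketch of that nvmf_tcp_init sequence, with the interface and namespace names from this run; the shell variables are only for readability.

#!/usr/bin/env bash
# Sketch of the test network built by nvmf/common.sh (nvmf_tcp_init) in the log.
target_if=cvl_0_0        # moved into the namespace, gets the target address
initiator_if=cvl_0_1     # stays in the root namespace, gets the initiator address
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Open the NVMe/TCP port; the harness tags the rule so it can be stripped again at teardown.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator

With only this one NIC available no second target IP gets configured, so the multipath test prints "only one NIC for nvmf test" right after the setup and goes straight to teardown.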
00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.032 rmmod nvme_tcp 00:10:08.032 rmmod nvme_fabrics 00:10:08.032 rmmod nvme_keyring 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.032 19:39:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.941 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.200 00:10:10.200 real 0m4.424s 00:10:10.200 user 0m0.843s 00:10:10.200 sys 0m1.583s 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.200 ************************************ 00:10:10.200 END TEST nvmf_target_multipath 00:10:10.200 ************************************ 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.200 ************************************ 00:10:10.200 START TEST nvmf_zcopy 00:10:10.200 ************************************ 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.200 * Looking for test storage... 
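Each suite finishes with the same nvmftestfini teardown: the kernel NVMe/TCP initiator modules are unloaded (the rmmod lines above), every SPDK_NVMF-tagged iptables rule is dropped, the target namespace is removed, and the leftover test addresses are flushed, after which the per-test timing summary is printed (0m19.669s real for nvmf_queue_depth, 0m4.424s real for nvmf_target_multipath). A sketch of that cleanup, assuming _remove_spdk_ns amounts to deleting the cvl_0_0_ns_spdk namespace:

#!/usr/bin/env bash
# Sketch of the nvmftestfini cleanup that closes each test in the log.
ns=cvl_0_0_ns_spdk
initiator_if=cvl_0_1

# Unload the NVMe-oF initiator stack; removing nvme-tcp also pulls out nvme_fabrics/nvme_keyring.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop every rule tagged with the SPDK_NVMF comment (this is what the iptr helper does).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target namespace (assumed equivalent of _remove_spdk_ns) and flush the test address.
ip netns delete "$ns" 2>/dev/null || true
ip -4 addr flush "$initiator_if"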
00:10:10.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.200 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.201 --rc genhtml_branch_coverage=1 00:10:10.201 --rc genhtml_function_coverage=1 00:10:10.201 --rc genhtml_legend=1 00:10:10.201 --rc geninfo_all_blocks=1 00:10:10.201 --rc geninfo_unexecuted_blocks=1 00:10:10.201 00:10:10.201 ' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.201 --rc genhtml_branch_coverage=1 00:10:10.201 --rc genhtml_function_coverage=1 00:10:10.201 --rc genhtml_legend=1 00:10:10.201 --rc geninfo_all_blocks=1 00:10:10.201 --rc geninfo_unexecuted_blocks=1 00:10:10.201 00:10:10.201 ' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.201 --rc genhtml_branch_coverage=1 00:10:10.201 --rc genhtml_function_coverage=1 00:10:10.201 --rc genhtml_legend=1 00:10:10.201 --rc geninfo_all_blocks=1 00:10:10.201 --rc geninfo_unexecuted_blocks=1 00:10:10.201 00:10:10.201 ' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.201 --rc genhtml_branch_coverage=1 00:10:10.201 --rc genhtml_function_coverage=1 00:10:10.201 --rc genhtml_legend=1 00:10:10.201 --rc geninfo_all_blocks=1 00:10:10.201 --rc geninfo_unexecuted_blocks=1 00:10:10.201 00:10:10.201 ' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.201 19:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:12.730 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:12.730 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:12.730 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:12.730 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.730 19:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:10:12.730 00:10:12.730 --- 10.0.0.2 ping statistics --- 00:10:12.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.730 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:10:12.730 00:10:12.730 --- 10.0.0.1 ping statistics --- 00:10:12.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.730 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.730 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2896367 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2896367 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2896367 ']' 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.731 19:40:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.731 [2024-10-13 19:40:02.212819] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
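For readers following the trace: the target-side network plumbing that just ran reduces to the short sequence below. This is a sketch assembled from the xtrace lines above, not an extra step in the job; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.1/10.0.0.2 addresses are simply what this rig's two detected E810 ports were assigned, with cvl_0_0 moved into the namespace as the target-side port and cvl_0_1 left on the host as the initiator side.

  # move one port into a private namespace for the target, keep the other on the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP port 4420 on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity check in both directions, as logged above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1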
00:10:12.731 [2024-10-13 19:40:02.212965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.731 [2024-10-13 19:40:02.357696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.731 [2024-10-13 19:40:02.496883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.731 [2024-10-13 19:40:02.496990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.731 [2024-10-13 19:40:02.497016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.731 [2024-10-13 19:40:02.497041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.731 [2024-10-13 19:40:02.497061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.731 [2024-10-13 19:40:02.498761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.665 [2024-10-13 19:40:03.200230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.665 [2024-10-13 19:40:03.216489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.665 malloc0 00:10:13.665 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:13.666 { 00:10:13.666 "params": { 00:10:13.666 "name": "Nvme$subsystem", 00:10:13.666 "trtype": "$TEST_TRANSPORT", 00:10:13.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.666 "adrfam": "ipv4", 00:10:13.666 "trsvcid": "$NVMF_PORT", 00:10:13.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.666 "hdgst": ${hdgst:-false}, 00:10:13.666 "ddgst": ${ddgst:-false} 00:10:13.666 }, 00:10:13.666 "method": "bdev_nvme_attach_controller" 00:10:13.666 } 00:10:13.666 EOF 00:10:13.666 )") 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:13.666 19:40:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:13.666 "params": { 00:10:13.666 "name": "Nvme1", 00:10:13.666 "trtype": "tcp", 00:10:13.666 "traddr": "10.0.0.2", 00:10:13.666 "adrfam": "ipv4", 00:10:13.666 "trsvcid": "4420", 00:10:13.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.666 "hdgst": false, 00:10:13.666 "ddgst": false 00:10:13.666 }, 00:10:13.666 "method": "bdev_nvme_attach_controller" 00:10:13.666 }' 00:10:13.666 [2024-10-13 19:40:03.377298] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:10:13.666 [2024-10-13 19:40:03.377461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896526 ] 00:10:13.924 [2024-10-13 19:40:03.518896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.924 [2024-10-13 19:40:03.660221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.490 Running I/O for 10 seconds... 00:10:16.356 4211.00 IOPS, 32.90 MiB/s [2024-10-13T17:40:07.546Z] 4241.00 IOPS, 33.13 MiB/s [2024-10-13T17:40:08.484Z] 4251.33 IOPS, 33.21 MiB/s [2024-10-13T17:40:09.417Z] 4266.00 IOPS, 33.33 MiB/s [2024-10-13T17:40:10.351Z] 4271.60 IOPS, 33.37 MiB/s [2024-10-13T17:40:11.285Z] 4271.67 IOPS, 33.37 MiB/s [2024-10-13T17:40:12.219Z] 4271.14 IOPS, 33.37 MiB/s [2024-10-13T17:40:13.594Z] 4271.12 IOPS, 33.37 MiB/s [2024-10-13T17:40:14.160Z] 4276.89 IOPS, 33.41 MiB/s [2024-10-13T17:40:14.418Z] 4277.40 IOPS, 33.42 MiB/s 00:10:24.603 Latency(us) 00:10:24.603 [2024-10-13T17:40:14.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:24.603 Verification LBA range: start 0x0 length 0x1000 00:10:24.603 Nvme1n1 : 10.02 4276.74 33.41 0.00 0.00 29846.87 703.91 42331.40 00:10:24.603 [2024-10-13T17:40:14.418Z] =================================================================================================================== 00:10:24.603 [2024-10-13T17:40:14.418Z] Total : 4276.74 33.41 0.00 0.00 29846.87 703.91 42331.40 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2897967 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:25.538 { 00:10:25.538 "params": { 00:10:25.538 "name": 
"Nvme$subsystem", 00:10:25.538 "trtype": "$TEST_TRANSPORT", 00:10:25.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.538 "adrfam": "ipv4", 00:10:25.538 "trsvcid": "$NVMF_PORT", 00:10:25.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.538 "hdgst": ${hdgst:-false}, 00:10:25.538 "ddgst": ${ddgst:-false} 00:10:25.538 }, 00:10:25.538 "method": "bdev_nvme_attach_controller" 00:10:25.538 } 00:10:25.538 EOF 00:10:25.538 )") 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:25.538 [2024-10-13 19:40:15.109905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.109970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:25.538 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:25.538 "params": { 00:10:25.538 "name": "Nvme1", 00:10:25.538 "trtype": "tcp", 00:10:25.538 "traddr": "10.0.0.2", 00:10:25.538 "adrfam": "ipv4", 00:10:25.538 "trsvcid": "4420", 00:10:25.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.538 "hdgst": false, 00:10:25.538 "ddgst": false 00:10:25.538 }, 00:10:25.538 "method": "bdev_nvme_attach_controller" 00:10:25.538 }' 00:10:25.538 [2024-10-13 19:40:15.117821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.117858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.125807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.125842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.133846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.133882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.141876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.141910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.149895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.149931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.157914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.157949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.165936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.165970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.173936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.538 [2024-10-13 19:40:15.173969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.538 [2024-10-13 19:40:15.181985] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.182018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.187355] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:10:25.539 [2024-10-13 19:40:15.187515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897967 ] 00:10:25.539 [2024-10-13 19:40:15.189989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.190025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.198044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.198080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.206056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.206091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.214057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.214091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.222110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.222144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.230121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.230155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.238120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.238153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.246202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.246236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.254174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.254208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.262220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.262256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.270249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.270284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.278252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.278286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 
[2024-10-13 19:40:15.286287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.286322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.294309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.294344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.302310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.302342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.310360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.310402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.318004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.539 [2024-10-13 19:40:15.318361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.318403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.326416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.326451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.334482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.334532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.342552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.342619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.539 [2024-10-13 19:40:15.350495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.539 [2024-10-13 19:40:15.350541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.358524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.358560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.366507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.366542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.374565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.374600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.382558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.382595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.390623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.390657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.398628] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.398662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.406623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.406656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.414677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.414711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.422688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.422722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.430691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.430725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.438790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.438824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.446736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.446770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.454788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.454822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.459694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.797 [2024-10-13 19:40:15.462805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.462839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.470804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.470838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.478949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.479005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.486986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.487044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.494877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.494910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.502935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.502969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.510926] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.510959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.797 [2024-10-13 19:40:15.518977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.797 [2024-10-13 19:40:15.519011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.526997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.527031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.535020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.535054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.543045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.543079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.551137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.551189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.559143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.559198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.567221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.567282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.575199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.575257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.583240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.583293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.591180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.591215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.599177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.599211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.798 [2024-10-13 19:40:15.607237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.798 [2024-10-13 19:40:15.607272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.615250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.615284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.623264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.623298] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.631324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.631358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.639296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.639339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.647343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.647377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.655378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.655423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.663370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.663413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.671423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.671457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.679442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.679476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.687449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.687482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.695494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.695528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.703506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.703540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.711625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.711680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.719641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.719709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.727703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.727761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.735618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.735652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.743627] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.743672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.751623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.751658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.759672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.759706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.767670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.767703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.775721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.775756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.783736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.783770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.791744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.791789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.056 [2024-10-13 19:40:15.799779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.056 [2024-10-13 19:40:15.799814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.807800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.807834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.815801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.815836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.823932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.823966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.831850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.831886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.839895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.839930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.847921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.847956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.855976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.856014] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.057 [2024-10-13 19:40:15.863981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.057 [2024-10-13 19:40:15.864018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.872008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.872045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.880037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.880076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.888052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.888089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.896052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.896087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.904101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.904135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.912137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.912172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.920118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.920152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.928177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.928215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.936192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.936234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.944190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.944233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.952238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.952273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.960239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.960286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.968285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.968319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.976312] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.976349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.984311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.984347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:15.992362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:15.992407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.000388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.000432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.008415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.008449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.016448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.016484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.024443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.024476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.071662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.071703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.076619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.076655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.084607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.084641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 Running I/O for 5 seconds... 
00:10:26.315 [2024-10-13 19:40:16.098297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.098340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.112909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.315 [2024-10-13 19:40:16.112950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.315 [2024-10-13 19:40:16.127594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.316 [2024-10-13 19:40:16.127634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.143237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.143277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.157458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.157497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.172451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.172491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.187067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.187107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.201731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.201772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.217234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.217273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.232133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.232186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.246819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.246858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.261663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.261703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.276632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.276673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.291985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.292026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.307450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 
[2024-10-13 19:40:16.307491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.322021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.322061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.336903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.574 [2024-10-13 19:40:16.336943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.574 [2024-10-13 19:40:16.351482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.575 [2024-10-13 19:40:16.351522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.575 [2024-10-13 19:40:16.366517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.575 [2024-10-13 19:40:16.366566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.575 [2024-10-13 19:40:16.381663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.575 [2024-10-13 19:40:16.381703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.396639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.396684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.411849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.411890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.427028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.427069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.441562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.441603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.456824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.456864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.471337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.471377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.486263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.486304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.501322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.501362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.516085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.516126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.531256] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.531297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.546323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.546364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.561640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.561679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.574215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.574256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.588618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.588659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.602863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.602902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.617567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.617607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.632857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.632898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.833 [2024-10-13 19:40:16.647491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.833 [2024-10-13 19:40:16.647532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.662941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.662982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.678010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.678051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.693553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.693594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.708913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.708954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.724006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.724048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.737917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.737958] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.753120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.753161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.768210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.768250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.783351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.783401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.798717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.798758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.814116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.814157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.829206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.829246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.843995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.844036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.859039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.859080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.873630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.873670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.888983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.889023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.092 [2024-10-13 19:40:16.904089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.092 [2024-10-13 19:40:16.904130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:16.919058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:16.919098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:16.934297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:16.934336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:16.946810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:16.946849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:16.961182] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:16.961221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:16.976155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:16.976196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:16.991363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:16.991417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.006512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.006552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.021240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.021282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.035637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.035688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.051281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.051322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.066282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.066322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.081692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.081732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 8431.00 IOPS, 65.87 MiB/s [2024-10-13T17:40:17.166Z] [2024-10-13 19:40:17.097238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.097281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.112418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.112467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.127885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.127926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.143145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.143185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.351 [2024-10-13 19:40:17.158963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.351 [2024-10-13 19:40:17.159005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.174377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:27.610 [2024-10-13 19:40:17.174427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.189200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.189255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.205215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.205256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.220698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.220738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.235652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.235693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.250564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.250604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.265331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.265372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.280433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.280474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.295627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.295679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.310804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.310845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.325917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.325957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.341275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.341316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.356499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.356550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.371646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.371689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.387007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.387049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.401917] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.401958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.610 [2024-10-13 19:40:17.416819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.610 [2024-10-13 19:40:17.416861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.432016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.432059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.447417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.447458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.462363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.462414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.477024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.477065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.491966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.492008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.507238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.507279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.522635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.522676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.537466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.537507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.552212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.552252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.567500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.567541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.583611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.583663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.599369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.599421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.614748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.614788] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.629980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.630019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.645640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.645682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.661492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.661532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.868 [2024-10-13 19:40:17.676808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.868 [2024-10-13 19:40:17.676848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.126 [2024-10-13 19:40:17.692227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.126 [2024-10-13 19:40:17.692270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.126 [2024-10-13 19:40:17.708372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.126 [2024-10-13 19:40:17.708423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.126 [2024-10-13 19:40:17.723922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.723963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.739391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.739443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.753674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.753715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.769170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.769211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.783718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.783760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.799313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.799354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.813511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.813551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.828232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.828272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.842598] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.842640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.857664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.857704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.872451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.872505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.887856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.887898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.902252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.902294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.916995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.917036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.127 [2024-10-13 19:40:17.931602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.127 [2024-10-13 19:40:17.931643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:17.946461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:17.946501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:17.961560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:17.961601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:17.976378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:17.976440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:17.991790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:17.991831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.006454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.006494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.021547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.021588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.036434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.036474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.050852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.050893] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.066655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.066695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.081826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.081866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 8414.50 IOPS, 65.74 MiB/s [2024-10-13T17:40:18.199Z] [2024-10-13 19:40:18.096646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.096687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.111558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.111598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.126123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.126164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.141186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.141226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.156576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.156632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.169326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.169367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.184454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.184494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.384 [2024-10-13 19:40:18.199543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.384 [2024-10-13 19:40:18.199584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 19:40:18.215479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.642 [2024-10-13 19:40:18.215521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 19:40:18.230823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.642 [2024-10-13 19:40:18.230866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 19:40:18.245278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.642 [2024-10-13 19:40:18.245319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 19:40:18.260183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.642 [2024-10-13 19:40:18.260224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 
19:40:18.274560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.642 [2024-10-13 19:40:18.274600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 19:40:18.289982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.642 [2024-10-13 19:40:18.290022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.642 [2024-10-13 19:40:18.304707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.304748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.319616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.319656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.334830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.334870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.350694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.350735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.366056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.366097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.377756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.377797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.393137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.393178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.408022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.408062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.423699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.423739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.438560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.438602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.643 [2024-10-13 19:40:18.454131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.643 [2024-10-13 19:40:18.454171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.900 [2024-10-13 19:40:18.468964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.469006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.483653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.483693] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.498920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.498961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.512461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.512502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.527767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.527807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.542390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.542438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.557339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.557380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.572541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.572582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.585936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.585976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.600801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.600842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.616017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.616059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.631573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.631615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.646485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.646526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.661840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.661881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.677483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.677523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.692809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.692849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.901 [2024-10-13 19:40:18.708914] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.901 [2024-10-13 19:40:18.708965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.724642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.724682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.740725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.740766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.756136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.756176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.771077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.771117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.786385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.786435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.802121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.802162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.814993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.815034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.830290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.830331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.845370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.845426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.860364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.860414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.875084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.875124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.890911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.890952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.906847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.906888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.921669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.921710] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.936217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.936257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.951540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.951580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.159 [2024-10-13 19:40:18.966451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.159 [2024-10-13 19:40:18.966493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:18.982128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:18.982169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:18.997124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:18.997175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.012170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.012211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.027080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.027121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.041840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.041880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.056753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.056793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.071546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.071588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.086644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.086684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 8408.33 IOPS, 65.69 MiB/s [2024-10-13T17:40:19.233Z] [2024-10-13 19:40:19.101145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.101186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.116103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.116158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.131125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.131165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 
19:40:19.146129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.146168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.160949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.160988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.176093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.176132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.190837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.190887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.205961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.206002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.418 [2024-10-13 19:40:19.220838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.418 [2024-10-13 19:40:19.220878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.676 [2024-10-13 19:40:19.235771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.676 [2024-10-13 19:40:19.235811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.676 [2024-10-13 19:40:19.250693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.676 [2024-10-13 19:40:19.250733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.676 [2024-10-13 19:40:19.265820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.676 [2024-10-13 19:40:19.265860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.676 [2024-10-13 19:40:19.281478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.676 [2024-10-13 19:40:19.281537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.676 [2024-10-13 19:40:19.296779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.676 [2024-10-13 19:40:19.296821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.312286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.312326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.328044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.328086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.343357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.343408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.358857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.358897] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.372121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.372161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.387318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.387358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.402016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.402059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.417030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.417073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.433343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.433385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.448762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.448802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.463783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.463823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.677 [2024-10-13 19:40:19.479531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.677 [2024-10-13 19:40:19.479572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.494722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.494762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.509354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.509405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.524273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.524314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.539257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.539298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.554334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.554375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.569176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.569228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.584401] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.584441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.599603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.599644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.614382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.614432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.629490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.629531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.644634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.644675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.659680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.659721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.674558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.674599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.689165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.689207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.703544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.703585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.718511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.718552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.733667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.733707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.935 [2024-10-13 19:40:19.748695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.935 [2024-10-13 19:40:19.748735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.761315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.761355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.775735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.775776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.790310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.790350] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.805269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.805309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.821073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.821113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.836170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.836211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.851141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.851181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.866841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.866881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.881866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.881908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.897082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.897123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.912211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.912253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.927177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.927217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.942169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.942209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.957418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.957458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.970423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.970465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:19.985382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:19.985431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.194 [2024-10-13 19:40:20.000951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.194 [2024-10-13 19:40:20.000992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.016459] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.016520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.032351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.032425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.046822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.046874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.062057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.062099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.076994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.077051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.092491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.092533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 8401.00 IOPS, 65.63 MiB/s [2024-10-13T17:40:20.267Z] [2024-10-13 19:40:20.108001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.108042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.122933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.122975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.452 [2024-10-13 19:40:20.138391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.452 [2024-10-13 19:40:20.138534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.153733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.153774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.168249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.168290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.183429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.183470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.198777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.198818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.213591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.213631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.228518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:30.453 [2024-10-13 19:40:20.228557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.242977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.243017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.453 [2024-10-13 19:40:20.257489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.453 [2024-10-13 19:40:20.257529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.271964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.272006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.287372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.287423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.302826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.302867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.318522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.318561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.329928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.329969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.345167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.345218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.360586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.360627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.376121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.376161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.391712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.391760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.407152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.407204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.422179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.422221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.437416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.437466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.452616] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.452658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.468171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.468213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.480863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.480904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.495734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.495774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.510779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.510818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.711 [2024-10-13 19:40:20.525617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.711 [2024-10-13 19:40:20.525657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.539987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.540029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.555339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.555379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.570457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.570498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.585947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.585989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.600887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.600927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.615284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.615324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.630367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.630417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.644992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.645033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.659198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.659239] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.674192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.674232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.969 [2024-10-13 19:40:20.689298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.969 [2024-10-13 19:40:20.689348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.970 [2024-10-13 19:40:20.701873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.970 [2024-10-13 19:40:20.701912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.970 [2024-10-13 19:40:20.716347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.970 [2024-10-13 19:40:20.716387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.970 [2024-10-13 19:40:20.731025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.970 [2024-10-13 19:40:20.731065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.970 [2024-10-13 19:40:20.746024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.970 [2024-10-13 19:40:20.746066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.970 [2024-10-13 19:40:20.761078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.970 [2024-10-13 19:40:20.761117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.970 [2024-10-13 19:40:20.775734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.970 [2024-10-13 19:40:20.775774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.790374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.790424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.805210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.805250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.819975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.820015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.835996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.836036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.851175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.851216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.866172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.866214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.880603] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.880644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.895425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.895466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.910569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.910610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.925596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.925638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.940838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.940879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.956136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.956177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.971056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.971107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:20.985810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:20.985850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:21.000987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:21.001028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:21.016283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:21.016323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.228 [2024-10-13 19:40:21.031192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.228 [2024-10-13 19:40:21.031246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 [2024-10-13 19:40:21.046259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.487 [2024-10-13 19:40:21.046300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 [2024-10-13 19:40:21.061071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.487 [2024-10-13 19:40:21.061113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 [2024-10-13 19:40:21.075745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.487 [2024-10-13 19:40:21.075785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 [2024-10-13 19:40:21.090369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.487 [2024-10-13 19:40:21.090420] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 8420.00 IOPS, 65.78 MiB/s [2024-10-13T17:40:21.302Z] [2024-10-13 19:40:21.104999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.487 [2024-10-13 19:40:21.105040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 [2024-10-13 19:40:21.114777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.487 [2024-10-13 19:40:21.114817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.487 00:10:31.487 Latency(us) 00:10:31.487 [2024-10-13T17:40:21.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.487 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:31.488 Nvme1n1 : 5.01 8421.28 65.79 0.00 0.00 15174.20 4514.70 24855.13 00:10:31.488 [2024-10-13T17:40:21.303Z] =================================================================================================================== 00:10:31.488 [2024-10-13T17:40:21.303Z] Total : 8421.28 65.79 0.00 0.00 15174.20 4514.70 24855.13 00:10:31.488 [2024-10-13 19:40:21.120417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.120454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.128434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.128471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.136420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.136457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.144470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.144506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.152494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.152530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.160560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.160595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.168674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.168739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.176683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.176752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.184727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.184793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.192606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.192641] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.200602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.200636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.208650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.208685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.216663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.216698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.224698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.224732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.232733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.232767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.240728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.240762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.248787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.248823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.256795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.256830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.264901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.264964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.272990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.273060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.280972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.281037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.288900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.288934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.488 [2024-10-13 19:40:21.296909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.488 [2024-10-13 19:40:21.296943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.304955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.304989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.312957] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.312991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.320977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.321011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.329003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.329039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.337028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.337064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.345050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.345085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.353072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.353108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.361100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.361135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.369088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.369121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.377143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.377177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.385162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.385197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.393163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.393198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.401207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.401242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.413256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.413292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.421263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.421298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.429291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.429326] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.437432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.437491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.445488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.445543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.453366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.453422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.461353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.461403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.469422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.469457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.477413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.477447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.485457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.485491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.493493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.493529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.501599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.501661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.509702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.509765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.517717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.517781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.525566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.525600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.533595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.533630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.541592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.541625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.549640] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.549683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.747 [2024-10-13 19:40:21.557661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.747 [2024-10-13 19:40:21.557698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.565665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.565698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.573715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.573749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.581730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.581784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.589729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.589774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.597792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.597827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.605803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.605846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.613826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.613868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.621870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.621914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.629852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.629887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.637927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.637961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.645919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.645964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.653919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.653953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.661976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.662009] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.669977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.670010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.678056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.678094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.686168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.686241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.694065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.694104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.702098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.702133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.710121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.710156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.718111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.718144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.726161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.726195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.734193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.734227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.742208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.742241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.750251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.750286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.758242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.006 [2024-10-13 19:40:21.758277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.006 [2024-10-13 19:40:21.766284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.766328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.007 [2024-10-13 19:40:21.774302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.774336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.007 [2024-10-13 19:40:21.782307] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.782341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.007 [2024-10-13 19:40:21.790345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.790379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.007 [2024-10-13 19:40:21.798351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.798385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.007 [2024-10-13 19:40:21.806436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.806478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.007 [2024-10-13 19:40:21.814599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.007 [2024-10-13 19:40:21.814665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.822430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.822465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.830493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.830528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.838495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.838529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.846494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.846528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.854541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.854576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.862535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.862570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.870577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.870612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.878602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.878636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.886608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.886641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.894659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.894693] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.902727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.902773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.910800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.910865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.918739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.918786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.926759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.926793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.934770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.934804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.942822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.942856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.950788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.950822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.958834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.265 [2024-10-13 19:40:21.958867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.265 [2024-10-13 19:40:21.966870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:21.966904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:21.974863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:21.974897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:21.982932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:21.982966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:21.990909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:21.990943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:21.998954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:21.998988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.007003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.007043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.014981] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.015016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.023044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.023079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.031046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.031081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.039046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.039079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.047091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.047125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 [2024-10-13 19:40:22.055090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.266 [2024-10-13 19:40:22.055123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2897967) - No such process 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2897967 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.266 delay0 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.266 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.524 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.524 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:32.524 [2024-10-13 19:40:22.195405] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:10:40.638 Initializing NVMe Controllers 00:10:40.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:40.638 Initialization complete. Launching workers. 00:10:40.638 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 218, failed: 19444 00:10:40.638 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19530, failed to submit 132 00:10:40.638 success 19455, unsuccessful 75, failed 0 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.638 rmmod nvme_tcp 00:10:40.638 rmmod nvme_fabrics 00:10:40.638 rmmod nvme_keyring 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:40.638 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2896367 ']' 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2896367 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2896367 ']' 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2896367 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2896367 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2896367' 00:10:40.639 killing process with pid 2896367 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2896367 00:10:40.639 19:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2896367 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.897 19:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.435 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.435 00:10:43.435 real 0m32.933s 00:10:43.435 user 0m49.151s 00:10:43.435 sys 0m8.801s 00:10:43.435 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.435 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.435 ************************************ 00:10:43.435 END TEST nvmf_zcopy 00:10:43.435 ************************************ 00:10:43.435 19:40:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.435 19:40:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.435 19:40:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.436 ************************************ 00:10:43.436 START TEST nvmf_nmic 00:10:43.436 ************************************ 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.436 * Looking for test storage... 
00:10:43.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.436 --rc genhtml_branch_coverage=1 00:10:43.436 --rc genhtml_function_coverage=1 00:10:43.436 --rc genhtml_legend=1 00:10:43.436 --rc geninfo_all_blocks=1 00:10:43.436 --rc geninfo_unexecuted_blocks=1 00:10:43.436 00:10:43.436 ' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.436 --rc genhtml_branch_coverage=1 00:10:43.436 --rc genhtml_function_coverage=1 00:10:43.436 --rc genhtml_legend=1 00:10:43.436 --rc geninfo_all_blocks=1 00:10:43.436 --rc geninfo_unexecuted_blocks=1 00:10:43.436 00:10:43.436 ' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.436 --rc genhtml_branch_coverage=1 00:10:43.436 --rc genhtml_function_coverage=1 00:10:43.436 --rc genhtml_legend=1 00:10:43.436 --rc geninfo_all_blocks=1 00:10:43.436 --rc geninfo_unexecuted_blocks=1 00:10:43.436 00:10:43.436 ' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.436 --rc genhtml_branch_coverage=1 00:10:43.436 --rc genhtml_function_coverage=1 00:10:43.436 --rc genhtml_legend=1 00:10:43.436 --rc geninfo_all_blocks=1 00:10:43.436 --rc geninfo_unexecuted_blocks=1 00:10:43.436 00:10:43.436 ' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:43.436 
19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:43.436 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.437 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:45.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.402 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:45.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.403 19:40:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:45.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:45.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:10:45.403 00:10:45.403 --- 10.0.0.2 ping statistics --- 00:10:45.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.403 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:10:45.403 00:10:45.403 --- 10.0.0.1 ping statistics --- 00:10:45.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.403 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2901681 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2901681 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2901681 ']' 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.403 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.662 [2024-10-13 19:40:35.288005] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:10:45.662 [2024-10-13 19:40:35.288147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.662 [2024-10-13 19:40:35.435878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.920 [2024-10-13 19:40:35.583255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.920 [2024-10-13 19:40:35.583341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.920 [2024-10-13 19:40:35.583371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.920 [2024-10-13 19:40:35.583411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.920 [2024-10-13 19:40:35.583435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.920 [2024-10-13 19:40:35.586310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.920 [2024-10-13 19:40:35.586370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.920 [2024-10-13 19:40:35.586450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.920 [2024-10-13 19:40:35.586454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.486 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.486 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:46.486 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:46.486 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.486 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 [2024-10-13 19:40:36.320801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 Malloc0 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 [2024-10-13 19:40:36.448649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:46.744 test case1: single bdev can't be used in multiple subsystems 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 [2024-10-13 19:40:36.472318] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:46.744 [2024-10-13 19:40:36.472357] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:46.744 [2024-10-13 19:40:36.472417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.744 request: 00:10:46.744 { 00:10:46.744 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:46.744 "namespace": { 00:10:46.744 "bdev_name": "Malloc0", 00:10:46.744 "no_auto_visible": false 
00:10:46.744 }, 00:10:46.744 "method": "nvmf_subsystem_add_ns", 00:10:46.744 "req_id": 1 00:10:46.744 } 00:10:46.744 Got JSON-RPC error response 00:10:46.744 response: 00:10:46.744 { 00:10:46.744 "code": -32602, 00:10:46.744 "message": "Invalid parameters" 00:10:46.744 } 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:46.744 Adding namespace failed - expected result. 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:46.744 test case2: host connect to nvmf target in multiple paths 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.744 [2024-10-13 19:40:36.480518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.744 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.678 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:48.243 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.243 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:48.243 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.243 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:48.243 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:50.140 19:40:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:50.140 [global] 00:10:50.140 thread=1 00:10:50.140 invalidate=1 00:10:50.140 rw=write 00:10:50.140 time_based=1 00:10:50.140 runtime=1 00:10:50.140 ioengine=libaio 00:10:50.140 direct=1 00:10:50.140 bs=4096 00:10:50.140 iodepth=1 00:10:50.140 norandommap=0 00:10:50.140 numjobs=1 00:10:50.140 00:10:50.140 verify_dump=1 00:10:50.140 verify_backlog=512 00:10:50.140 verify_state_save=0 00:10:50.140 do_verify=1 00:10:50.140 verify=crc32c-intel 00:10:50.140 [job0] 00:10:50.140 filename=/dev/nvme0n1 00:10:50.140 Could not set queue depth (nvme0n1) 00:10:50.398 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.398 fio-3.35 00:10:50.398 Starting 1 thread 00:10:51.770 00:10:51.770 job0: (groupid=0, jobs=1): err= 0: pid=2902414: Sun Oct 13 19:40:41 2024 00:10:51.770 read: IOPS=23, BW=92.4KiB/s (94.6kB/s)(96.0KiB/1039msec) 00:10:51.770 slat (nsec): min=5807, max=35844, avg=25348.04, stdev=9511.33 00:10:51.770 clat (usec): min=449, max=41973, avg=39282.37, stdev=8275.10 00:10:51.771 lat (usec): min=470, max=41990, avg=39307.72, stdev=8276.11 00:10:51.771 clat percentiles (usec): 00:10:51.771 | 1.00th=[ 449], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:51.771 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:51.771 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:51.771 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:51.771 | 99.99th=[42206] 00:10:51.771 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:10:51.771 slat (nsec): min=5137, max=26864, avg=6592.91, stdev=2321.60 00:10:51.771 clat (usec): min=153, max=417, avg=178.30, stdev=17.34 00:10:51.771 lat (usec): min=163, max=444, avg=184.90, stdev=18.05 00:10:51.771 clat percentiles (usec): 00:10:51.771 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:10:51.771 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:10:51.771 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:10:51.771 | 99.00th=[ 225], 99.50th=[ 243], 99.90th=[ 416], 99.95th=[ 416], 00:10:51.771 | 99.99th=[ 416] 00:10:51.771 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.771 lat (usec) : 250=95.15%, 500=0.56% 00:10:51.771 lat (msec) : 50=4.29% 00:10:51.771 cpu : usr=0.00%, sys=0.58%, ctx=536, majf=0, minf=1 00:10:51.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.771 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.771 00:10:51.771 Run status group 0 (all jobs): 00:10:51.771 READ: bw=92.4KiB/s (94.6kB/s), 92.4KiB/s-92.4KiB/s (94.6kB/s-94.6kB/s), io=96.0KiB (98.3kB), run=1039-1039msec 00:10:51.771 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:10:51.771 00:10:51.771 Disk stats (read/write): 00:10:51.771 nvme0n1: ios=70/512, merge=0/0, ticks=805/88, in_queue=893, util=91.68% 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.771 rmmod nvme_tcp 00:10:51.771 rmmod nvme_fabrics 00:10:51.771 rmmod nvme_keyring 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2901681 ']' 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2901681 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2901681 ']' 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2901681 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.771 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2901681 00:10:52.029 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.029 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.029 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2901681' 00:10:52.029 killing process with pid 2901681 00:10:52.029 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2901681 00:10:52.029 19:40:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 2901681 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.404 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:55.312 00:10:55.312 real 0m12.143s 00:10:55.312 user 0m29.200s 00:10:55.312 sys 0m2.645s 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.312 ************************************ 00:10:55.312 END TEST nvmf_nmic 00:10:55.312 ************************************ 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.312 ************************************ 00:10:55.312 START TEST nvmf_fio_target 00:10:55.312 ************************************ 00:10:55.312 19:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:55.312 * Looking for test storage... 
00:10:55.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.312 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.312 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.312 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.571 --rc genhtml_branch_coverage=1 00:10:55.571 --rc genhtml_function_coverage=1 00:10:55.571 --rc genhtml_legend=1 00:10:55.571 --rc geninfo_all_blocks=1 00:10:55.571 --rc geninfo_unexecuted_blocks=1 00:10:55.571 00:10:55.571 ' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.571 --rc genhtml_branch_coverage=1 00:10:55.571 --rc genhtml_function_coverage=1 00:10:55.571 --rc genhtml_legend=1 00:10:55.571 --rc geninfo_all_blocks=1 00:10:55.571 --rc geninfo_unexecuted_blocks=1 00:10:55.571 00:10:55.571 ' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:55.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.571 --rc genhtml_branch_coverage=1 00:10:55.571 --rc genhtml_function_coverage=1 00:10:55.571 --rc genhtml_legend=1 00:10:55.571 --rc geninfo_all_blocks=1 00:10:55.571 --rc geninfo_unexecuted_blocks=1 00:10:55.571 00:10:55.571 ' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.571 --rc genhtml_branch_coverage=1 00:10:55.571 --rc genhtml_function_coverage=1 00:10:55.571 --rc genhtml_legend=1 00:10:55.571 --rc geninfo_all_blocks=1 00:10:55.571 --rc geninfo_unexecuted_blocks=1 00:10:55.571 00:10:55.571 ' 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.571 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.572 19:40:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.572 19:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.474 19:40:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.474 19:40:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:57.474 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.475 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.475 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.475 19:40:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:10:57.475 00:10:57.475 --- 10.0.0.2 ping statistics --- 00:10:57.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.475 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:10:57.475 00:10:57.475 --- 10.0.0.1 ping statistics --- 00:10:57.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.475 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2904631 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2904631 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2904631 ']' 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.475 19:40:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.734 [2024-10-13 19:40:47.296897] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:10:57.734 [2024-10-13 19:40:47.297072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.734 [2024-10-13 19:40:47.442690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.992 [2024-10-13 19:40:47.588824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.992 [2024-10-13 19:40:47.588915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.992 [2024-10-13 19:40:47.588940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.992 [2024-10-13 19:40:47.588964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.992 [2024-10-13 19:40:47.588992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.992 [2024-10-13 19:40:47.592101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.992 [2024-10-13 19:40:47.592162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.992 [2024-10-13 19:40:47.592217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.992 [2024-10-13 19:40:47.592224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.558 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.814 [2024-10-13 19:40:48.539033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.814 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.379 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:59.379 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.636 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:59.636 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.894 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:59.894 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.152 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:00.152 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:00.717 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.975 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:00.975 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.233 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:01.233 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.491 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:01.491 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:01.748 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.006 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:02.006 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.265 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:02.265 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.830 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.830 [2024-10-13 19:40:52.601644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.830 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:03.088 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:03.345 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.279 19:40:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:04.279 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.279 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.279 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:04.279 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:04.279 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:06.177 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:06.177 [global] 00:11:06.177 thread=1 00:11:06.177 invalidate=1 00:11:06.177 rw=write 00:11:06.177 time_based=1 00:11:06.177 runtime=1 00:11:06.177 ioengine=libaio 00:11:06.177 direct=1 00:11:06.177 bs=4096 00:11:06.177 iodepth=1 00:11:06.177 norandommap=0 00:11:06.177 numjobs=1 00:11:06.177 00:11:06.177 verify_dump=1 00:11:06.177 verify_backlog=512 00:11:06.177 verify_state_save=0 00:11:06.177 do_verify=1 00:11:06.177 verify=crc32c-intel 00:11:06.177 [job0] 00:11:06.177 filename=/dev/nvme0n1 00:11:06.177 [job1] 00:11:06.177 filename=/dev/nvme0n2 00:11:06.177 [job2] 00:11:06.177 filename=/dev/nvme0n3 00:11:06.177 [job3] 00:11:06.177 filename=/dev/nvme0n4 00:11:06.177 Could not set queue depth (nvme0n1) 00:11:06.177 Could not set queue depth (nvme0n2) 00:11:06.177 Could not set queue depth (nvme0n3) 00:11:06.177 Could not set queue depth (nvme0n4) 00:11:06.435 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.435 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.435 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.435 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.435 fio-3.35 00:11:06.435 Starting 4 threads 00:11:07.808 00:11:07.808 job0: (groupid=0, jobs=1): err= 0: pid=2905845: Sun Oct 13 19:40:57 2024 00:11:07.808 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:07.808 slat (nsec): min=6098, max=48956, avg=12614.59, stdev=5992.10 00:11:07.808 clat (usec): min=231, max=40992, avg=302.15, stdev=1039.09 00:11:07.808 lat (usec): min=239, max=40999, avg=314.77, stdev=1038.99 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 
00:11:07.808 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:11:07.808 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:11:07.808 | 99.00th=[ 326], 99.50th=[ 396], 99.90th=[ 424], 99.95th=[41157], 00:11:07.808 | 99.99th=[41157] 00:11:07.808 write: IOPS=1931, BW=7724KiB/s (7910kB/s)(7732KiB/1001msec); 0 zone resets 00:11:07.808 slat (nsec): min=7975, max=78688, avg=18291.41, stdev=8149.17 00:11:07.808 clat (usec): min=185, max=1332, avg=240.88, stdev=45.84 00:11:07.808 lat (usec): min=195, max=1344, avg=259.18, stdev=48.76 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 219], 00:11:07.808 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:11:07.808 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:11:07.808 | 99.00th=[ 314], 99.50th=[ 429], 99.90th=[ 914], 99.95th=[ 1336], 00:11:07.808 | 99.99th=[ 1336] 00:11:07.808 bw ( KiB/s): min= 8192, max= 8192, per=34.55%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.808 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.808 lat (usec) : 250=42.87%, 500=56.88%, 750=0.12%, 1000=0.09% 00:11:07.808 lat (msec) : 2=0.03%, 50=0.03% 00:11:07.808 cpu : usr=3.90%, sys=7.30%, ctx=3470, majf=0, minf=1 00:11:07.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.808 issued rwts: total=1536,1933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.808 job1: (groupid=0, jobs=1): err= 0: pid=2905846: Sun Oct 13 19:40:57 2024 00:11:07.808 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:11:07.808 slat (nsec): min=8838, max=19832, avg=13390.77, stdev=1780.42 00:11:07.808 clat (usec): min=356, max=41074, avg=39114.90, stdev=8657.28 00:11:07.808 lat (usec): min=369, max=41089, avg=39128.29, stdev=8657.38 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 359], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:07.808 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.808 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.808 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.808 | 99.99th=[41157] 00:11:07.808 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:11:07.808 slat (nsec): min=8319, max=70607, avg=21785.86, stdev=11097.01 00:11:07.808 clat (usec): min=187, max=589, avg=301.16, stdev=58.64 00:11:07.808 lat (usec): min=205, max=612, avg=322.95, stdev=53.58 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 202], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 253], 00:11:07.808 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 297], 00:11:07.808 | 70.00th=[ 322], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 404], 00:11:07.808 | 99.00th=[ 445], 99.50th=[ 478], 99.90th=[ 586], 99.95th=[ 586], 00:11:07.808 | 99.99th=[ 586] 00:11:07.808 bw ( KiB/s): min= 4096, max= 4096, per=17.27%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.808 lat (usec) : 250=16.48%, 500=79.21%, 750=0.37% 00:11:07.808 lat (msec) : 50=3.93% 00:11:07.808 cpu : usr=1.17%, sys=0.88%, ctx=535, majf=0, minf=2 00:11:07.808 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.808 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.808 job2: (groupid=0, jobs=1): err= 0: pid=2905847: Sun Oct 13 19:40:57 2024 00:11:07.808 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:07.808 slat (nsec): min=6215, max=50506, avg=12689.11, stdev=6605.54 00:11:07.808 clat (usec): min=245, max=40735, avg=322.14, stdev=1032.09 00:11:07.808 lat (usec): min=252, max=40752, avg=334.82, stdev=1032.27 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:11:07.808 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:11:07.808 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:11:07.808 | 99.00th=[ 351], 99.50th=[ 408], 99.90th=[ 474], 99.95th=[40633], 00:11:07.808 | 99.99th=[40633] 00:11:07.808 write: IOPS=1880, BW=7520KiB/s (7701kB/s)(7528KiB/1001msec); 0 zone resets 00:11:07.808 slat (nsec): min=8045, max=57074, avg=18403.40, stdev=8163.87 00:11:07.808 clat (usec): min=184, max=1006, avg=231.86, stdev=35.56 00:11:07.808 lat (usec): min=194, max=1018, avg=250.26, stdev=36.92 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:11:07.808 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:11:07.808 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:11:07.808 | 99.00th=[ 289], 99.50th=[ 355], 99.90th=[ 988], 99.95th=[ 1004], 00:11:07.808 | 99.99th=[ 1004] 00:11:07.808 bw ( KiB/s): min= 8192, max= 8192, per=34.55%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.808 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.808 lat (usec) : 250=49.21%, 500=50.61%, 750=0.09%, 1000=0.03% 00:11:07.808 lat (msec) : 2=0.03%, 50=0.03% 00:11:07.808 cpu : usr=4.10%, sys=6.90%, ctx=3419, majf=0, minf=2 00:11:07.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.808 issued rwts: total=1536,1882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.808 job3: (groupid=0, jobs=1): err= 0: pid=2905848: Sun Oct 13 19:40:57 2024 00:11:07.808 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:07.808 slat (nsec): min=5959, max=55655, avg=12254.01, stdev=6223.30 00:11:07.808 clat (usec): min=248, max=1042, avg=307.67, stdev=50.72 00:11:07.808 lat (usec): min=255, max=1061, avg=319.93, stdev=52.64 00:11:07.808 clat percentiles (usec): 00:11:07.808 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:11:07.808 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:11:07.809 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 404], 00:11:07.809 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 914], 99.95th=[ 1045], 00:11:07.809 | 99.99th=[ 1045] 00:11:07.809 write: IOPS=1771, BW=7085KiB/s (7255kB/s)(7092KiB/1001msec); 0 zone resets 00:11:07.809 slat (nsec): min=7966, max=68244, avg=19523.41, stdev=9257.23 00:11:07.809 clat (usec): min=186, max=920, avg=258.76, stdev=48.31 
00:11:07.809 lat (usec): min=196, max=929, avg=278.28, stdev=48.82 00:11:07.809 clat percentiles (usec): 00:11:07.809 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 227], 00:11:07.809 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:11:07.809 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 351], 00:11:07.809 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 734], 99.95th=[ 922], 00:11:07.809 | 99.99th=[ 922] 00:11:07.809 bw ( KiB/s): min= 8192, max= 8192, per=34.55%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.809 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.809 lat (usec) : 250=26.35%, 500=73.28%, 750=0.24%, 1000=0.09% 00:11:07.809 lat (msec) : 2=0.03% 00:11:07.809 cpu : usr=4.40%, sys=6.50%, ctx=3310, majf=0, minf=1 00:11:07.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.809 issued rwts: total=1536,1773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.809 00:11:07.809 Run status group 0 (all jobs): 00:11:07.809 READ: bw=17.6MiB/s (18.4MB/s), 85.5KiB/s-6138KiB/s (87.6kB/s-6285kB/s), io=18.1MiB (19.0MB), run=1001-1029msec 00:11:07.809 WRITE: bw=23.2MiB/s (24.3MB/s), 1990KiB/s-7724KiB/s (2038kB/s-7910kB/s), io=23.8MiB (25.0MB), run=1001-1029msec 00:11:07.809 00:11:07.809 Disk stats (read/write): 00:11:07.809 nvme0n1: ios=1427/1536, merge=0/0, ticks=1190/351, in_queue=1541, util=98.00% 00:11:07.809 nvme0n2: ios=67/512, merge=0/0, ticks=1462/144, in_queue=1606, util=98.07% 00:11:07.809 nvme0n3: ios=1387/1536, merge=0/0, ticks=725/307, in_queue=1032, util=98.12% 00:11:07.809 nvme0n4: ios=1277/1536, merge=0/0, ticks=1303/367, in_queue=1670, util=98.31% 00:11:07.809 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:07.809 [global] 00:11:07.809 thread=1 00:11:07.809 invalidate=1 00:11:07.809 rw=randwrite 00:11:07.809 time_based=1 00:11:07.809 runtime=1 00:11:07.809 ioengine=libaio 00:11:07.809 direct=1 00:11:07.809 bs=4096 00:11:07.809 iodepth=1 00:11:07.809 norandommap=0 00:11:07.809 numjobs=1 00:11:07.809 00:11:07.809 verify_dump=1 00:11:07.809 verify_backlog=512 00:11:07.809 verify_state_save=0 00:11:07.809 do_verify=1 00:11:07.809 verify=crc32c-intel 00:11:07.809 [job0] 00:11:07.809 filename=/dev/nvme0n1 00:11:07.809 [job1] 00:11:07.809 filename=/dev/nvme0n2 00:11:07.809 [job2] 00:11:07.809 filename=/dev/nvme0n3 00:11:07.809 [job3] 00:11:07.809 filename=/dev/nvme0n4 00:11:07.809 Could not set queue depth (nvme0n1) 00:11:07.809 Could not set queue depth (nvme0n2) 00:11:07.809 Could not set queue depth (nvme0n3) 00:11:07.809 Could not set queue depth (nvme0n4) 00:11:07.809 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.809 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.809 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.809 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.809 fio-3.35 00:11:07.809 Starting 4 threads 00:11:09.181 00:11:09.181 
job0: (groupid=0, jobs=1): err= 0: pid=2906072: Sun Oct 13 19:40:58 2024 00:11:09.181 read: IOPS=802, BW=3209KiB/s (3286kB/s)(3292KiB/1026msec) 00:11:09.181 slat (nsec): min=5456, max=41841, avg=11762.49, stdev=4841.85 00:11:09.181 clat (usec): min=220, max=41089, avg=904.68, stdev=5077.35 00:11:09.181 lat (usec): min=228, max=41107, avg=916.45, stdev=5077.96 00:11:09.181 clat percentiles (usec): 00:11:09.181 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:11:09.181 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:11:09.181 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:11:09.181 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:09.181 | 99.99th=[41157] 00:11:09.181 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:11:09.181 slat (nsec): min=6980, max=52896, avg=15437.05, stdev=6204.66 00:11:09.182 clat (usec): min=178, max=466, avg=242.20, stdev=42.53 00:11:09.182 lat (usec): min=189, max=502, avg=257.64, stdev=43.76 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:11:09.182 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:11:09.182 | 70.00th=[ 251], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 326], 00:11:09.182 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 416], 99.95th=[ 465], 00:11:09.182 | 99.99th=[ 465] 00:11:09.182 bw ( KiB/s): min= 8192, max= 8192, per=36.63%, avg=8192.00, stdev= 0.00, samples=1 00:11:09.182 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:09.182 lat (usec) : 250=51.11%, 500=48.08%, 750=0.11% 00:11:09.182 lat (msec) : 50=0.70% 00:11:09.182 cpu : usr=1.46%, sys=3.90%, ctx=1847, majf=0, minf=2 00:11:09.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 issued rwts: total=823,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.182 job1: (groupid=0, jobs=1): err= 0: pid=2906073: Sun Oct 13 19:40:58 2024 00:11:09.182 read: IOPS=1030, BW=4124KiB/s (4223kB/s)(4128KiB/1001msec) 00:11:09.182 slat (nsec): min=5949, max=74967, avg=18183.17, stdev=11941.31 00:11:09.182 clat (usec): min=225, max=41043, avg=555.96, stdev=2479.78 00:11:09.182 lat (usec): min=232, max=41050, avg=574.14, stdev=2479.86 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:11:09.182 | 30.00th=[ 265], 40.00th=[ 302], 50.00th=[ 412], 60.00th=[ 445], 00:11:09.182 | 70.00th=[ 465], 80.00th=[ 494], 90.00th=[ 553], 95.00th=[ 603], 00:11:09.182 | 99.00th=[ 750], 99.50th=[ 848], 99.90th=[41157], 99.95th=[41157], 00:11:09.182 | 99.99th=[41157] 00:11:09.182 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:09.182 slat (nsec): min=6134, max=80758, avg=18973.65, stdev=10949.01 00:11:09.182 clat (usec): min=168, max=1201, avg=238.75, stdev=67.83 00:11:09.182 lat (usec): min=178, max=1212, avg=257.72, stdev=70.11 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:11:09.182 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:11:09.182 | 70.00th=[ 237], 80.00th=[ 293], 90.00th=[ 347], 95.00th=[ 371], 00:11:09.182 | 99.00th=[ 408], 99.50th=[ 424], 
99.90th=[ 758], 99.95th=[ 1205], 00:11:09.182 | 99.99th=[ 1205] 00:11:09.182 bw ( KiB/s): min= 8192, max= 8192, per=36.63%, avg=8192.00, stdev= 0.00, samples=1 00:11:09.182 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:09.182 lat (usec) : 250=50.08%, 500=42.91%, 750=6.54%, 1000=0.23% 00:11:09.182 lat (msec) : 2=0.04%, 50=0.19% 00:11:09.182 cpu : usr=3.00%, sys=4.40%, ctx=2569, majf=0, minf=1 00:11:09.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.182 job2: (groupid=0, jobs=1): err= 0: pid=2906075: Sun Oct 13 19:40:58 2024 00:11:09.182 read: IOPS=998, BW=3992KiB/s (4088kB/s)(4104KiB/1028msec) 00:11:09.182 slat (nsec): min=5665, max=76350, avg=19853.36, stdev=11691.03 00:11:09.182 clat (usec): min=279, max=41004, avg=530.23, stdev=1790.95 00:11:09.182 lat (usec): min=289, max=41021, avg=550.09, stdev=1790.89 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 355], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 392], 00:11:09.182 | 30.00th=[ 400], 40.00th=[ 412], 50.00th=[ 429], 60.00th=[ 453], 00:11:09.182 | 70.00th=[ 474], 80.00th=[ 515], 90.00th=[ 562], 95.00th=[ 594], 00:11:09.182 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[41157], 99.95th=[41157], 00:11:09.182 | 99.99th=[41157] 00:11:09.182 write: IOPS=1494, BW=5977KiB/s (6120kB/s)(6144KiB/1028msec); 0 zone resets 00:11:09.182 slat (nsec): min=6984, max=71272, avg=18672.05, stdev=9552.07 00:11:09.182 clat (usec): min=183, max=478, avg=273.79, stdev=57.01 00:11:09.182 lat (usec): min=195, max=500, avg=292.46, stdev=59.12 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 215], 00:11:09.182 | 30.00th=[ 233], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 285], 00:11:09.182 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 379], 00:11:09.182 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 478], 00:11:09.182 | 99.99th=[ 478] 00:11:09.182 bw ( KiB/s): min= 5456, max= 6832, per=27.47%, avg=6144.00, stdev=972.98, samples=2 00:11:09.182 iops : min= 1364, max= 1708, avg=1536.00, stdev=243.24, samples=2 00:11:09.182 lat (usec) : 250=22.99%, 500=67.49%, 750=9.45% 00:11:09.182 lat (msec) : 50=0.08% 00:11:09.182 cpu : usr=2.73%, sys=5.36%, ctx=2565, majf=0, minf=1 00:11:09.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.182 job3: (groupid=0, jobs=1): err= 0: pid=2906076: Sun Oct 13 19:40:58 2024 00:11:09.182 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:09.182 slat (nsec): min=5043, max=54936, avg=11942.63, stdev=6133.18 00:11:09.182 clat (usec): min=242, max=1459, avg=345.12, stdev=85.01 00:11:09.182 lat (usec): min=251, max=1473, avg=357.07, stdev=87.13 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:11:09.182 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 
314], 60.00th=[ 375], 00:11:09.182 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 469], 95.00th=[ 494], 00:11:09.182 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 1004], 99.95th=[ 1467], 00:11:09.182 | 99.99th=[ 1467] 00:11:09.182 write: IOPS=1650, BW=6601KiB/s (6760kB/s)(6608KiB/1001msec); 0 zone resets 00:11:09.182 slat (nsec): min=7134, max=71489, avg=19862.09, stdev=8645.93 00:11:09.182 clat (usec): min=174, max=469, avg=244.97, stdev=51.35 00:11:09.182 lat (usec): min=184, max=490, avg=264.84, stdev=51.50 00:11:09.182 clat percentiles (usec): 00:11:09.182 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 212], 00:11:09.182 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:11:09.182 | 70.00th=[ 245], 80.00th=[ 273], 90.00th=[ 326], 95.00th=[ 371], 00:11:09.182 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 465], 99.95th=[ 469], 00:11:09.182 | 99.99th=[ 469] 00:11:09.182 bw ( KiB/s): min= 8032, max= 8032, per=35.91%, avg=8032.00, stdev= 0.00, samples=1 00:11:09.182 iops : min= 2008, max= 2008, avg=2008.00, stdev= 0.00, samples=1 00:11:09.182 lat (usec) : 250=37.70%, 500=60.45%, 750=1.73%, 1000=0.09% 00:11:09.182 lat (msec) : 2=0.03% 00:11:09.182 cpu : usr=3.80%, sys=6.10%, ctx=3189, majf=0, minf=1 00:11:09.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.182 issued rwts: total=1536,1652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.182 00:11:09.182 Run status group 0 (all jobs): 00:11:09.182 READ: bw=16.8MiB/s (17.6MB/s), 3209KiB/s-6138KiB/s (3286kB/s-6285kB/s), io=17.3MiB (18.1MB), run=1001-1028msec 00:11:09.182 WRITE: bw=21.8MiB/s (22.9MB/s), 3992KiB/s-6601KiB/s (4088kB/s-6760kB/s), io=22.5MiB (23.5MB), run=1001-1028msec 00:11:09.182 00:11:09.182 Disk stats (read/write): 00:11:09.182 nvme0n1: ios=868/1024, merge=0/0, ticks=551/235, in_queue=786, util=86.97% 00:11:09.182 nvme0n2: ios=1066/1428, merge=0/0, ticks=739/333, in_queue=1072, util=98.48% 00:11:09.182 nvme0n3: ios=1064/1132, merge=0/0, ticks=928/312, in_queue=1240, util=96.35% 00:11:09.182 nvme0n4: ios=1196/1536, merge=0/0, ticks=708/344, in_queue=1052, util=98.21% 00:11:09.182 19:40:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:09.182 [global] 00:11:09.182 thread=1 00:11:09.182 invalidate=1 00:11:09.182 rw=write 00:11:09.182 time_based=1 00:11:09.182 runtime=1 00:11:09.182 ioengine=libaio 00:11:09.182 direct=1 00:11:09.182 bs=4096 00:11:09.182 iodepth=128 00:11:09.182 norandommap=0 00:11:09.182 numjobs=1 00:11:09.182 00:11:09.182 verify_dump=1 00:11:09.182 verify_backlog=512 00:11:09.182 verify_state_save=0 00:11:09.182 do_verify=1 00:11:09.182 verify=crc32c-intel 00:11:09.182 [job0] 00:11:09.182 filename=/dev/nvme0n1 00:11:09.182 [job1] 00:11:09.182 filename=/dev/nvme0n2 00:11:09.182 [job2] 00:11:09.182 filename=/dev/nvme0n3 00:11:09.182 [job3] 00:11:09.182 filename=/dev/nvme0n4 00:11:09.182 Could not set queue depth (nvme0n1) 00:11:09.182 Could not set queue depth (nvme0n2) 00:11:09.182 Could not set queue depth (nvme0n3) 00:11:09.182 Could not set queue depth (nvme0n4) 00:11:09.182 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:11:09.182 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.182 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.182 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.182 fio-3.35 00:11:09.182 Starting 4 threads 00:11:10.561 00:11:10.561 job0: (groupid=0, jobs=1): err= 0: pid=2906302: Sun Oct 13 19:41:00 2024 00:11:10.561 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:11:10.561 slat (nsec): min=1871, max=22182k, avg=124002.55, stdev=906622.39 00:11:10.561 clat (usec): min=5024, max=41940, avg=16514.13, stdev=6440.20 00:11:10.561 lat (usec): min=5031, max=41945, avg=16638.14, stdev=6478.17 00:11:10.561 clat percentiles (usec): 00:11:10.561 | 1.00th=[ 6980], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:11:10.561 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13698], 60.00th=[14746], 00:11:10.561 | 70.00th=[16712], 80.00th=[20317], 90.00th=[26346], 95.00th=[31065], 00:11:10.561 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:10.561 | 99.99th=[41681] 00:11:10.561 write: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1009msec); 0 zone resets 00:11:10.561 slat (usec): min=2, max=14308, avg=98.06, stdev=635.02 00:11:10.561 clat (usec): min=1186, max=34189, avg=13625.96, stdev=3539.72 00:11:10.561 lat (usec): min=1192, max=34192, avg=13724.02, stdev=3590.57 00:11:10.561 clat percentiles (usec): 00:11:10.561 | 1.00th=[ 3621], 5.00th=[ 7177], 10.00th=[10421], 20.00th=[11994], 00:11:10.561 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13829], 60.00th=[14353], 00:11:10.561 | 70.00th=[14877], 80.00th=[15401], 90.00th=[16450], 95.00th=[17171], 00:11:10.561 | 99.00th=[28967], 99.50th=[31065], 99.90th=[34341], 99.95th=[34341], 00:11:10.561 | 99.99th=[34341] 00:11:10.561 bw ( KiB/s): min=17176, max=17280, per=33.46%, avg=17228.00, stdev=73.54, samples=2 00:11:10.561 iops : min= 4294, max= 4320, avg=4307.00, stdev=18.38, samples=2 00:11:10.561 lat (msec) : 2=0.18%, 4=0.72%, 10=4.83%, 20=82.84%, 50=11.44% 00:11:10.561 cpu : usr=4.76%, sys=8.04%, ctx=420, majf=0, minf=1 00:11:10.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.562 issued rwts: total=4096,4435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.562 job1: (groupid=0, jobs=1): err= 0: pid=2906303: Sun Oct 13 19:41:00 2024 00:11:10.562 read: IOPS=1618, BW=6474KiB/s (6629kB/s)(6532KiB/1009msec) 00:11:10.562 slat (usec): min=2, max=43714, avg=170.10, stdev=1401.41 00:11:10.562 clat (usec): min=1498, max=91468, avg=23868.72, stdev=18968.10 00:11:10.562 lat (usec): min=10915, max=91474, avg=24038.82, stdev=18986.18 00:11:10.562 clat percentiles (usec): 00:11:10.562 | 1.00th=[10945], 5.00th=[11600], 10.00th=[13435], 20.00th=[14091], 00:11:10.562 | 30.00th=[14222], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:11:10.562 | 70.00th=[15795], 80.00th=[32637], 90.00th=[55313], 95.00th=[70779], 00:11:10.562 | 99.00th=[77071], 99.50th=[87557], 99.90th=[91751], 99.95th=[91751], 00:11:10.562 | 99.99th=[91751] 00:11:10.562 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:11:10.562 slat (usec): min=2, max=57599, 
avg=349.59, stdev=2704.94 00:11:10.562 clat (msec): min=9, max=176, avg=40.71, stdev=34.74 00:11:10.562 lat (msec): min=10, max=176, avg=41.06, stdev=34.95 00:11:10.562 clat percentiles (msec): 00:11:10.562 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:11:10.562 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 43], 00:11:10.562 | 70.00th=[ 54], 80.00th=[ 68], 90.00th=[ 83], 95.00th=[ 97], 00:11:10.562 | 99.00th=[ 178], 99.50th=[ 178], 99.90th=[ 178], 99.95th=[ 178], 00:11:10.562 | 99.99th=[ 178] 00:11:10.562 bw ( KiB/s): min= 7880, max= 8272, per=15.69%, avg=8076.00, stdev=277.19, samples=2 00:11:10.562 iops : min= 1970, max= 2068, avg=2019.00, stdev=69.30, samples=2 00:11:10.562 lat (msec) : 2=0.03%, 10=0.05%, 20=61.04%, 50=13.37%, 100=22.90% 00:11:10.562 lat (msec) : 250=2.61% 00:11:10.562 cpu : usr=1.69%, sys=2.38%, ctx=203, majf=0, minf=1 00:11:10.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.562 issued rwts: total=1633,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.562 job2: (groupid=0, jobs=1): err= 0: pid=2906323: Sun Oct 13 19:41:00 2024 00:11:10.562 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:11:10.562 slat (usec): min=2, max=18580, avg=153.19, stdev=1040.54 00:11:10.562 clat (usec): min=7294, max=48731, avg=19609.12, stdev=6051.07 00:11:10.562 lat (usec): min=7303, max=48753, avg=19762.31, stdev=6130.40 00:11:10.562 clat percentiles (usec): 00:11:10.562 | 1.00th=[ 7701], 5.00th=[13173], 10.00th=[13698], 20.00th=[15008], 00:11:10.562 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17695], 60.00th=[18482], 00:11:10.562 | 70.00th=[19792], 80.00th=[25560], 90.00th=[29492], 95.00th=[30278], 00:11:10.562 | 99.00th=[36439], 99.50th=[36439], 99.90th=[41157], 99.95th=[46924], 00:11:10.562 | 99.99th=[48497] 00:11:10.562 write: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1007msec); 0 zone resets 00:11:10.562 slat (usec): min=3, max=24909, avg=135.55, stdev=1037.31 00:11:10.562 clat (usec): min=706, max=68502, avg=19578.52, stdev=8714.00 00:11:10.562 lat (usec): min=1468, max=68508, avg=19714.07, stdev=8796.37 00:11:10.562 clat percentiles (usec): 00:11:10.562 | 1.00th=[ 4817], 5.00th=[ 7177], 10.00th=[10945], 20.00th=[13042], 00:11:10.562 | 30.00th=[14746], 40.00th=[16057], 50.00th=[17433], 60.00th=[18482], 00:11:10.562 | 70.00th=[22938], 80.00th=[28443], 90.00th=[30540], 95.00th=[31589], 00:11:10.562 | 99.00th=[53740], 99.50th=[59507], 99.90th=[66323], 99.95th=[66323], 00:11:10.562 | 99.99th=[68682] 00:11:10.562 bw ( KiB/s): min=11216, max=15312, per=25.76%, avg=13264.00, stdev=2896.31, samples=2 00:11:10.562 iops : min= 2804, max= 3828, avg=3316.00, stdev=724.08, samples=2 00:11:10.562 lat (usec) : 750=0.02% 00:11:10.562 lat (msec) : 2=0.12%, 4=0.03%, 10=5.72%, 20=61.86%, 50=31.61% 00:11:10.562 lat (msec) : 100=0.63% 00:11:10.562 cpu : usr=3.88%, sys=6.66%, ctx=313, majf=0, minf=1 00:11:10.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.562 issued rwts: total=3072,3444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.562 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:11:10.562 job3: (groupid=0, jobs=1): err= 0: pid=2906329: Sun Oct 13 19:41:00 2024 00:11:10.562 read: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1010msec) 00:11:10.562 slat (usec): min=2, max=21657, avg=179.05, stdev=1376.95 00:11:10.562 clat (usec): min=1445, max=84854, avg=22595.36, stdev=11287.13 00:11:10.562 lat (usec): min=6134, max=86291, avg=22774.41, stdev=11389.74 00:11:10.562 clat percentiles (usec): 00:11:10.562 | 1.00th=[ 8586], 5.00th=[11731], 10.00th=[13698], 20.00th=[14746], 00:11:10.562 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17695], 60.00th=[21365], 00:11:10.562 | 70.00th=[25560], 80.00th=[29230], 90.00th=[33162], 95.00th=[48497], 00:11:10.562 | 99.00th=[67634], 99.50th=[69731], 99.90th=[70779], 99.95th=[73925], 00:11:10.562 | 99.99th=[84411] 00:11:10.562 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:11:10.562 slat (usec): min=3, max=27657, avg=136.53, stdev=1188.34 00:11:10.562 clat (usec): min=1914, max=57453, avg=19388.45, stdev=8056.58 00:11:10.562 lat (usec): min=1922, max=57469, avg=19524.98, stdev=8175.03 00:11:10.562 clat percentiles (usec): 00:11:10.562 | 1.00th=[ 4228], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[13698], 00:11:10.562 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16712], 60.00th=[17433], 00:11:10.562 | 70.00th=[22938], 80.00th=[28181], 90.00th=[30278], 95.00th=[32637], 00:11:10.562 | 99.00th=[46400], 99.50th=[50070], 99.90th=[52167], 99.95th=[54789], 00:11:10.562 | 99.99th=[57410] 00:11:10.562 bw ( KiB/s): min=12288, max=12288, per=23.87%, avg=12288.00, stdev= 0.00, samples=2 00:11:10.562 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:10.562 lat (msec) : 2=0.18%, 4=0.30%, 10=3.00%, 20=57.38%, 50=36.72% 00:11:10.562 lat (msec) : 100=2.43% 00:11:10.562 cpu : usr=3.07%, sys=5.45%, ctx=238, majf=0, minf=1 00:11:10.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.562 issued rwts: total=3028,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.562 00:11:10.562 Run status group 0 (all jobs): 00:11:10.562 READ: bw=45.7MiB/s (48.0MB/s), 6474KiB/s-15.9MiB/s (6629kB/s-16.6MB/s), io=46.2MiB (48.5MB), run=1007-1010msec 00:11:10.562 WRITE: bw=50.3MiB/s (52.7MB/s), 8119KiB/s-17.2MiB/s (8314kB/s-18.0MB/s), io=50.8MiB (53.2MB), run=1007-1010msec 00:11:10.562 00:11:10.562 Disk stats (read/write): 00:11:10.562 nvme0n1: ios=3634/3815, merge=0/0, ticks=41333/41666, in_queue=82999, util=90.68% 00:11:10.562 nvme0n2: ios=1575/1703, merge=0/0, ticks=7572/20021, in_queue=27593, util=88.73% 00:11:10.562 nvme0n3: ios=2612/2630, merge=0/0, ticks=30015/43551, in_queue=73566, util=100.00% 00:11:10.562 nvme0n4: ios=2608/2850, merge=0/0, ticks=46675/46784, in_queue=93459, util=99.58% 00:11:10.562 19:41:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:10.562 [global] 00:11:10.562 thread=1 00:11:10.562 invalidate=1 00:11:10.562 rw=randwrite 00:11:10.562 time_based=1 00:11:10.562 runtime=1 00:11:10.562 ioengine=libaio 00:11:10.562 direct=1 00:11:10.562 bs=4096 00:11:10.562 iodepth=128 00:11:10.562 norandommap=0 00:11:10.562 numjobs=1 00:11:10.562 00:11:10.562 verify_dump=1 00:11:10.562 
verify_backlog=512 00:11:10.562 verify_state_save=0 00:11:10.562 do_verify=1 00:11:10.562 verify=crc32c-intel 00:11:10.562 [job0] 00:11:10.562 filename=/dev/nvme0n1 00:11:10.562 [job1] 00:11:10.562 filename=/dev/nvme0n2 00:11:10.562 [job2] 00:11:10.562 filename=/dev/nvme0n3 00:11:10.562 [job3] 00:11:10.562 filename=/dev/nvme0n4 00:11:10.562 Could not set queue depth (nvme0n1) 00:11:10.562 Could not set queue depth (nvme0n2) 00:11:10.562 Could not set queue depth (nvme0n3) 00:11:10.562 Could not set queue depth (nvme0n4) 00:11:10.822 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.822 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.822 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.822 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.822 fio-3.35 00:11:10.822 Starting 4 threads 00:11:12.196 00:11:12.196 job0: (groupid=0, jobs=1): err= 0: pid=2906653: Sun Oct 13 19:41:01 2024 00:11:12.196 read: IOPS=2172, BW=8691KiB/s (8899kB/s)(8708KiB/1002msec) 00:11:12.196 slat (usec): min=3, max=37216, avg=228.88, stdev=1697.24 00:11:12.196 clat (usec): min=554, max=107533, avg=29534.73, stdev=24620.81 00:11:12.196 lat (msec): min=3, max=107, avg=29.76, stdev=24.75 00:11:12.196 clat percentiles (msec): 00:11:12.196 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:11:12.196 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 26], 00:11:12.196 | 70.00th=[ 30], 80.00th=[ 52], 90.00th=[ 69], 95.00th=[ 80], 00:11:12.196 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:11:12.196 | 99.99th=[ 108] 00:11:12.196 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:11:12.196 slat (usec): min=3, max=12780, avg=188.71, stdev=869.18 00:11:12.196 clat (usec): min=10040, max=90764, avg=24155.23, stdev=15208.33 00:11:12.196 lat (usec): min=10204, max=90771, avg=24343.94, stdev=15285.63 00:11:12.196 clat percentiles (usec): 00:11:12.196 | 1.00th=[10421], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:11:12.196 | 30.00th=[13435], 40.00th=[14615], 50.00th=[22676], 60.00th=[25560], 00:11:12.196 | 70.00th=[27657], 80.00th=[30540], 90.00th=[34341], 95.00th=[42206], 00:11:12.196 | 99.00th=[90702], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:11:12.196 | 99.99th=[90702] 00:11:12.196 bw ( KiB/s): min= 8192, max=12288, per=21.06%, avg=10240.00, stdev=2896.31, samples=2 00:11:12.196 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:12.196 lat (usec) : 750=0.02% 00:11:12.196 lat (msec) : 4=0.68%, 10=0.68%, 20=48.15%, 50=38.74%, 100=10.43% 00:11:12.196 lat (msec) : 250=1.31% 00:11:12.196 cpu : usr=2.50%, sys=3.50%, ctx=277, majf=0, minf=1 00:11:12.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:12.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.196 issued rwts: total=2177,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.196 job1: (groupid=0, jobs=1): err= 0: pid=2906654: Sun Oct 13 19:41:01 2024 00:11:12.196 read: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(14.8MiB/1010msec) 00:11:12.196 slat (usec): min=2, max=13971, avg=104.90, 
stdev=853.69 00:11:12.196 clat (usec): min=4019, max=33865, avg=15657.49, stdev=3436.42 00:11:12.196 lat (usec): min=9373, max=33877, avg=15762.39, stdev=3525.71 00:11:12.196 clat percentiles (usec): 00:11:12.196 | 1.00th=[ 9634], 5.00th=[11207], 10.00th=[11731], 20.00th=[13173], 00:11:12.196 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15270], 60.00th=[16188], 00:11:12.196 | 70.00th=[16909], 80.00th=[17171], 90.00th=[18220], 95.00th=[22938], 00:11:12.196 | 99.00th=[29754], 99.50th=[31327], 99.90th=[32900], 99.95th=[33817], 00:11:12.196 | 99.99th=[33817] 00:11:12.196 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:11:12.196 slat (usec): min=3, max=12901, avg=101.14, stdev=760.30 00:11:12.196 clat (usec): min=949, max=78474, avg=16885.74, stdev=14586.38 00:11:12.196 lat (usec): min=990, max=78481, avg=16986.88, stdev=14678.83 00:11:12.196 clat percentiles (usec): 00:11:12.196 | 1.00th=[ 2835], 5.00th=[ 5669], 10.00th=[ 7046], 20.00th=[ 8717], 00:11:12.196 | 30.00th=[10159], 40.00th=[11600], 50.00th=[12649], 60.00th=[14222], 00:11:12.196 | 70.00th=[15401], 80.00th=[17433], 90.00th=[33817], 95.00th=[57934], 00:11:12.196 | 99.00th=[74974], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:11:12.196 | 99.99th=[78119] 00:11:12.196 bw ( KiB/s): min=16336, max=16432, per=33.70%, avg=16384.00, stdev=67.88, samples=2 00:11:12.196 iops : min= 4084, max= 4108, avg=4096.00, stdev=16.97, samples=2 00:11:12.196 lat (usec) : 1000=0.01% 00:11:12.196 lat (msec) : 2=0.13%, 4=0.85%, 10=14.95%, 20=71.91%, 50=9.01% 00:11:12.196 lat (msec) : 100=3.14% 00:11:12.196 cpu : usr=2.87%, sys=5.15%, ctx=268, majf=0, minf=1 00:11:12.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:12.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.197 issued rwts: total=3781,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.197 job2: (groupid=0, jobs=1): err= 0: pid=2906657: Sun Oct 13 19:41:01 2024 00:11:12.197 read: IOPS=1718, BW=6872KiB/s (7037kB/s)(6948KiB/1011msec) 00:11:12.197 slat (usec): min=3, max=25017, avg=312.53, stdev=1856.49 00:11:12.197 clat (msec): min=2, max=100, avg=37.64, stdev=23.00 00:11:12.197 lat (msec): min=10, max=100, avg=37.95, stdev=23.16 00:11:12.197 clat percentiles (msec): 00:11:12.197 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 20], 00:11:12.197 | 30.00th=[ 20], 40.00th=[ 24], 50.00th=[ 29], 60.00th=[ 33], 00:11:12.197 | 70.00th=[ 50], 80.00th=[ 64], 90.00th=[ 78], 95.00th=[ 83], 00:11:12.197 | 99.00th=[ 89], 99.50th=[ 89], 99.90th=[ 96], 99.95th=[ 101], 00:11:12.197 | 99.99th=[ 101] 00:11:12.197 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:11:12.197 slat (usec): min=4, max=13723, avg=214.98, stdev=1045.57 00:11:12.197 clat (usec): min=10703, max=90765, avg=30159.87, stdev=16166.92 00:11:12.197 lat (usec): min=10713, max=90780, avg=30374.86, stdev=16256.46 00:11:12.197 clat percentiles (usec): 00:11:12.197 | 1.00th=[13698], 5.00th=[13960], 10.00th=[15139], 20.00th=[22152], 00:11:12.197 | 30.00th=[23725], 40.00th=[24773], 50.00th=[25822], 60.00th=[26346], 00:11:12.197 | 70.00th=[28443], 80.00th=[33817], 90.00th=[49546], 95.00th=[72877], 00:11:12.197 | 99.00th=[90702], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:11:12.197 | 99.99th=[90702] 00:11:12.197 bw ( KiB/s): min= 8192, max= 8192, 
per=16.85%, avg=8192.00, stdev= 0.00, samples=2 00:11:12.197 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:12.197 lat (msec) : 4=0.03%, 20=21.74%, 50=59.84%, 100=18.36%, 250=0.03% 00:11:12.197 cpu : usr=3.37%, sys=4.55%, ctx=227, majf=0, minf=1 00:11:12.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:11:12.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.197 issued rwts: total=1737,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.197 job3: (groupid=0, jobs=1): err= 0: pid=2906658: Sun Oct 13 19:41:01 2024 00:11:12.197 read: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1003msec) 00:11:12.197 slat (usec): min=3, max=16062, avg=139.75, stdev=853.45 00:11:12.197 clat (usec): min=1907, max=76542, avg=19117.54, stdev=9023.90 00:11:12.197 lat (usec): min=6773, max=92218, avg=19257.30, stdev=9107.70 00:11:12.197 clat percentiles (usec): 00:11:12.197 | 1.00th=[ 7046], 5.00th=[12125], 10.00th=[12780], 20.00th=[13435], 00:11:12.197 | 30.00th=[14353], 40.00th=[15401], 50.00th=[15664], 60.00th=[16712], 00:11:12.197 | 70.00th=[19006], 80.00th=[22414], 90.00th=[33424], 95.00th=[35914], 00:11:12.197 | 99.00th=[52691], 99.50th=[65274], 99.90th=[76022], 99.95th=[76022], 00:11:12.197 | 99.99th=[76022] 00:11:12.197 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:12.197 slat (usec): min=4, max=18586, avg=128.99, stdev=811.81 00:11:12.197 clat (usec): min=7398, max=79663, avg=16343.59, stdev=9107.30 00:11:12.197 lat (usec): min=7414, max=80424, avg=16472.59, stdev=9182.65 00:11:12.197 clat percentiles (usec): 00:11:12.197 | 1.00th=[ 9110], 5.00th=[12256], 10.00th=[12387], 20.00th=[12780], 00:11:12.197 | 30.00th=[13173], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:11:12.197 | 70.00th=[15401], 80.00th=[16319], 90.00th=[19006], 95.00th=[24249], 00:11:12.197 | 99.00th=[77071], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:11:12.197 | 99.99th=[79168] 00:11:12.197 bw ( KiB/s): min=12288, max=16416, per=29.52%, avg=14352.00, stdev=2918.94, samples=2 00:11:12.197 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:11:12.197 lat (msec) : 2=0.01%, 10=1.62%, 20=82.48%, 50=14.11%, 100=1.78% 00:11:12.197 cpu : usr=5.49%, sys=7.19%, ctx=269, majf=0, minf=1 00:11:12.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:12.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.197 issued rwts: total=3568,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.197 00:11:12.197 Run status group 0 (all jobs): 00:11:12.197 READ: bw=43.5MiB/s (45.6MB/s), 6872KiB/s-14.6MiB/s (7037kB/s-15.3MB/s), io=44.0MiB (46.1MB), run=1002-1011msec 00:11:12.197 WRITE: bw=47.5MiB/s (49.8MB/s), 8103KiB/s-15.8MiB/s (8297kB/s-16.6MB/s), io=48.0MiB (50.3MB), run=1002-1011msec 00:11:12.197 00:11:12.197 Disk stats (read/write): 00:11:12.197 nvme0n1: ios=1588/1761, merge=0/0, ticks=13973/13404, in_queue=27377, util=98.00% 00:11:12.197 nvme0n2: ios=3072/3431, merge=0/0, ticks=46746/52667, in_queue=99413, util=86.59% 00:11:12.197 nvme0n3: ios=1569/1870, merge=0/0, ticks=18725/16723, in_queue=35448, util=97.70% 00:11:12.197 nvme0n4: 
ios=2875/3072, merge=0/0, ticks=26241/24273, in_queue=50514, util=89.68% 00:11:12.197 19:41:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:12.197 19:41:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2906796 00:11:12.197 19:41:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:12.197 19:41:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:12.197 [global] 00:11:12.197 thread=1 00:11:12.197 invalidate=1 00:11:12.197 rw=read 00:11:12.197 time_based=1 00:11:12.197 runtime=10 00:11:12.197 ioengine=libaio 00:11:12.197 direct=1 00:11:12.197 bs=4096 00:11:12.197 iodepth=1 00:11:12.197 norandommap=1 00:11:12.197 numjobs=1 00:11:12.197 00:11:12.197 [job0] 00:11:12.197 filename=/dev/nvme0n1 00:11:12.197 [job1] 00:11:12.197 filename=/dev/nvme0n2 00:11:12.197 [job2] 00:11:12.197 filename=/dev/nvme0n3 00:11:12.197 [job3] 00:11:12.197 filename=/dev/nvme0n4 00:11:12.197 Could not set queue depth (nvme0n1) 00:11:12.197 Could not set queue depth (nvme0n2) 00:11:12.197 Could not set queue depth (nvme0n3) 00:11:12.197 Could not set queue depth (nvme0n4) 00:11:12.197 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.197 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.197 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.197 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.197 fio-3.35 00:11:12.197 Starting 4 threads 00:11:15.478 19:41:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:15.478 19:41:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:15.478 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=303104, buflen=4096 00:11:15.478 fio: pid=2906890, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.478 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.478 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:15.478 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=19161088, buflen=4096 00:11:15.478 fio: pid=2906889, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.043 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1810432, buflen=4096 00:11:16.044 fio: pid=2906887, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.044 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.044 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:16.302 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=1351680, buflen=4096 00:11:16.302 fio: pid=2906888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.302 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.302 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:16.302 00:11:16.302 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2906887: Sun Oct 13 19:41:05 2024 00:11:16.302 read: IOPS=124, BW=496KiB/s (508kB/s)(1768KiB/3561msec) 00:11:16.302 slat (usec): min=6, max=8949, avg=52.89, stdev=483.77 00:11:16.302 clat (usec): min=270, max=42344, avg=7944.26, stdev=15870.00 00:11:16.302 lat (usec): min=286, max=50035, avg=7997.23, stdev=15942.12 00:11:16.302 clat percentiles (usec): 00:11:16.302 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 297], 00:11:16.302 | 30.00th=[ 310], 40.00th=[ 338], 50.00th=[ 375], 60.00th=[ 433], 00:11:16.302 | 70.00th=[ 515], 80.00th=[ 701], 90.00th=[41157], 95.00th=[41157], 00:11:16.302 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.302 | 99.99th=[42206] 00:11:16.302 bw ( KiB/s): min= 200, max= 1840, per=10.10%, avg=572.00, stdev=637.01, samples=6 00:11:16.302 iops : min= 50, max= 460, avg=143.00, stdev=159.25, samples=6 00:11:16.302 lat (usec) : 500=68.40%, 750=12.87% 00:11:16.302 lat (msec) : 50=18.51% 00:11:16.302 cpu : usr=0.08%, sys=0.48%, ctx=448, majf=0, minf=1 00:11:16.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.302 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2906888: Sun Oct 13 19:41:05 2024 00:11:16.302 read: IOPS=84, BW=338KiB/s (346kB/s)(1320KiB/3903msec) 00:11:16.302 slat (usec): min=5, max=14894, avg=103.55, stdev=1013.76 00:11:16.302 clat (usec): min=213, max=42381, avg=11649.55, stdev=18356.84 00:11:16.302 lat (usec): min=219, max=57002, avg=11753.25, stdev=18436.92 00:11:16.302 clat percentiles (usec): 00:11:16.302 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 233], 00:11:16.302 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 273], 00:11:16.302 | 70.00th=[ 302], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:11:16.302 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.302 | 99.99th=[42206] 00:11:16.302 bw ( KiB/s): min= 96, max= 737, per=3.32%, avg=188.71, stdev=241.79, samples=7 00:11:16.302 iops : min= 24, max= 184, avg=47.14, stdev=60.35, samples=7 00:11:16.302 lat (usec) : 250=39.27%, 500=32.02%, 750=0.30% 00:11:16.302 lat (msec) : 10=0.30%, 50=27.79% 00:11:16.302 cpu : usr=0.10%, sys=0.10%, ctx=336, majf=0, minf=1 00:11:16.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 issued rwts: total=331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.302 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:11:16.302 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2906889: Sun Oct 13 19:41:05 2024 00:11:16.302 read: IOPS=1437, BW=5747KiB/s (5885kB/s)(18.3MiB/3256msec) 00:11:16.302 slat (usec): min=4, max=12593, avg=21.17, stdev=220.43 00:11:16.302 clat (usec): min=257, max=41458, avg=665.80, stdev=3661.51 00:11:16.302 lat (usec): min=264, max=41491, avg=686.97, stdev=3667.96 00:11:16.302 clat percentiles (usec): 00:11:16.302 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 293], 00:11:16.302 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:11:16.302 | 70.00th=[ 347], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 412], 00:11:16.302 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:11:16.302 | 99.99th=[41681] 00:11:16.302 bw ( KiB/s): min= 208, max=11672, per=96.41%, avg=5458.67, stdev=5077.33, samples=6 00:11:16.302 iops : min= 52, max= 2918, avg=1364.67, stdev=1269.33, samples=6 00:11:16.302 lat (usec) : 500=98.70%, 750=0.43% 00:11:16.302 lat (msec) : 10=0.02%, 20=0.02%, 50=0.81% 00:11:16.302 cpu : usr=1.23%, sys=2.70%, ctx=4681, majf=0, minf=2 00:11:16.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 issued rwts: total=4679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.302 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2906890: Sun Oct 13 19:41:05 2024 00:11:16.302 read: IOPS=25, BW=99.5KiB/s (102kB/s)(296KiB/2976msec) 00:11:16.302 slat (nsec): min=11526, max=35124, avg=23406.53, stdev=9260.55 00:11:16.302 clat (usec): min=322, max=41066, avg=39868.07, stdev=6608.93 00:11:16.302 lat (usec): min=340, max=41077, avg=39891.59, stdev=6608.38 00:11:16.302 clat percentiles (usec): 00:11:16.302 | 1.00th=[ 322], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:16.302 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:16.302 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:16.302 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:16.302 | 99.99th=[41157] 00:11:16.302 bw ( KiB/s): min= 96, max= 104, per=1.75%, avg=99.20, stdev= 4.38, samples=5 00:11:16.302 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:16.302 lat (usec) : 500=1.33%, 750=1.33% 00:11:16.302 lat (msec) : 50=96.00% 00:11:16.302 cpu : usr=0.07%, sys=0.00%, ctx=76, majf=0, minf=2 00:11:16.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.302 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.302 00:11:16.302 Run status group 0 (all jobs): 00:11:16.302 READ: bw=5661KiB/s (5797kB/s), 99.5KiB/s-5747KiB/s (102kB/s-5885kB/s), io=21.6MiB (22.6MB), run=2976-3903msec 00:11:16.303 00:11:16.303 Disk stats (read/write): 00:11:16.303 nvme0n1: ios=477/0, merge=0/0, ticks=4343/0, in_queue=4343, util=99.17% 00:11:16.303 nvme0n2: ios=329/0, merge=0/0, ticks=3804/0, 
in_queue=3804, util=96.13% 00:11:16.303 nvme0n3: ios=4329/0, merge=0/0, ticks=2929/0, in_queue=2929, util=96.17% 00:11:16.303 nvme0n4: ios=71/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.75% 00:11:16.561 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.561 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:16.819 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.819 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:17.076 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.077 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:17.641 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.641 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:17.899 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:17.899 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2906796 00:11:17.899 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:17.899 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:18.831 nvmf hotplug test: fio failed as expected 00:11:18.831 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # 
rm -f ./local-job0-0-verify.state 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.089 rmmod nvme_tcp 00:11:19.089 rmmod nvme_fabrics 00:11:19.089 rmmod nvme_keyring 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2904631 ']' 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2904631 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2904631 ']' 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2904631 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2904631 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2904631' 00:11:19.089 killing process with pid 2904631 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2904631 00:11:19.089 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2904631 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # 
iptables-save 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.519 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.423 00:11:22.423 real 0m26.978s 00:11:22.423 user 1m35.041s 00:11:22.423 sys 0m6.822s 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.423 ************************************ 00:11:22.423 END TEST nvmf_fio_target 00:11:22.423 ************************************ 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.423 19:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.423 ************************************ 00:11:22.423 START TEST nvmf_bdevio 00:11:22.423 ************************************ 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:22.423 * Looking for test storage... 
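The hotplug phase that just finished in target/fio.sh comes down to three steps: start a long read job against the exported namespaces, delete the backing raid/malloc bdevs while it runs, then confirm fio exits non-zero ("nvmf hotplug test: fio failed as expected" above). A minimal sketch of that flow, reusing only the fio-wrapper and rpc.py invocations visible in this log; the variable names and the final check are illustrative, not the script's own code:

#!/usr/bin/env bash
# Sketch of the hotplug check driven by target/fio.sh (illustrative, not verbatim).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1. Start a 10-second read workload against the NVMe-oF namespaces in the background.
"$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!

sleep 3

# 2. Pull the backing bdevs out from under the running job.
"$SPDK/scripts/rpc.py" bdev_raid_delete concat0
"$SPDK/scripts/rpc.py" bdev_raid_delete raid0
for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$SPDK/scripts/rpc.py" bdev_malloc_delete "$malloc"
done

# 3. fio should now hit I/O errors ("Operation not supported" in the log) and exit non-zero.
fio_status=0
wait "$fio_pid" || fio_status=$?

if [ "$fio_status" -eq 0 ]; then
    echo "nvmf hotplug test: fio did not fail as expected" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"
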
00:11:22.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:22.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.423 --rc genhtml_branch_coverage=1 00:11:22.423 --rc genhtml_function_coverage=1 00:11:22.423 --rc genhtml_legend=1 00:11:22.423 --rc geninfo_all_blocks=1 00:11:22.423 --rc geninfo_unexecuted_blocks=1 00:11:22.423 00:11:22.423 ' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:22.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.423 --rc genhtml_branch_coverage=1 00:11:22.423 --rc genhtml_function_coverage=1 00:11:22.423 --rc genhtml_legend=1 00:11:22.423 --rc geninfo_all_blocks=1 00:11:22.423 --rc geninfo_unexecuted_blocks=1 00:11:22.423 00:11:22.423 ' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:22.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.423 --rc genhtml_branch_coverage=1 00:11:22.423 --rc genhtml_function_coverage=1 00:11:22.423 --rc genhtml_legend=1 00:11:22.423 --rc geninfo_all_blocks=1 00:11:22.423 --rc geninfo_unexecuted_blocks=1 00:11:22.423 00:11:22.423 ' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:22.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.423 --rc genhtml_branch_coverage=1 00:11:22.423 --rc genhtml_function_coverage=1 00:11:22.423 --rc genhtml_legend=1 00:11:22.423 --rc geninfo_all_blocks=1 00:11:22.423 --rc geninfo_unexecuted_blocks=1 00:11:22.423 00:11:22.423 ' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.423 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.424 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:24.956 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:24.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.956 19:41:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:24.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:24.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.956 
19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:24.956 00:11:24.956 --- 10.0.0.2 ping statistics --- 00:11:24.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.956 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:24.956 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:11:24.956 00:11:24.957 --- 10.0.0.1 ping statistics --- 00:11:24.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.957 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2909790 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2909790 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2909790 ']' 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.957 19:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.957 [2024-10-13 19:41:14.441811] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:11:24.957 [2024-10-13 19:41:14.441968] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.957 [2024-10-13 19:41:14.586196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.957 [2024-10-13 19:41:14.732512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.957 [2024-10-13 19:41:14.732607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.957 [2024-10-13 19:41:14.732632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.957 [2024-10-13 19:41:14.732655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.957 [2024-10-13 19:41:14.732674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.957 [2024-10-13 19:41:14.735812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.957 [2024-10-13 19:41:14.735872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:24.957 [2024-10-13 19:41:14.736012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.957 [2024-10-13 19:41:14.736017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 [2024-10-13 19:41:15.461906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 Malloc0 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.891 19:41:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 [2024-10-13 19:41:15.580594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:25.891 { 00:11:25.891 "params": { 00:11:25.891 "name": "Nvme$subsystem", 00:11:25.891 "trtype": "$TEST_TRANSPORT", 00:11:25.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.891 "adrfam": "ipv4", 00:11:25.891 "trsvcid": "$NVMF_PORT", 00:11:25.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.891 "hdgst": ${hdgst:-false}, 00:11:25.891 "ddgst": ${ddgst:-false} 00:11:25.891 }, 00:11:25.891 "method": "bdev_nvme_attach_controller" 00:11:25.891 } 00:11:25.891 EOF 00:11:25.891 )") 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:25.891 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:25.891 "params": { 00:11:25.891 "name": "Nvme1", 00:11:25.891 "trtype": "tcp", 00:11:25.891 "traddr": "10.0.0.2", 00:11:25.891 "adrfam": "ipv4", 00:11:25.891 "trsvcid": "4420", 00:11:25.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.891 "hdgst": false, 00:11:25.891 "ddgst": false 00:11:25.891 }, 00:11:25.891 "method": "bdev_nvme_attach_controller" 00:11:25.891 }' 00:11:25.891 [2024-10-13 19:41:15.667554] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:11:25.891 [2024-10-13 19:41:15.667703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909948 ] 00:11:26.150 [2024-10-13 19:41:15.794510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.150 [2024-10-13 19:41:15.929889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.150 [2024-10-13 19:41:15.929940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.150 [2024-10-13 19:41:15.929936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.716 I/O targets: 00:11:26.716 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:26.716 00:11:26.716 00:11:26.716 CUnit - A unit testing framework for C - Version 2.1-3 00:11:26.716 http://cunit.sourceforge.net/ 00:11:26.716 00:11:26.716 00:11:26.716 Suite: bdevio tests on: Nvme1n1 00:11:26.973 Test: blockdev write read block ...passed 00:11:26.973 Test: blockdev write zeroes read block ...passed 00:11:26.973 Test: blockdev write zeroes read no split ...passed 00:11:26.973 Test: blockdev write zeroes read split ...passed 00:11:26.973 Test: blockdev write zeroes read split partial ...passed 00:11:26.974 Test: blockdev reset ...[2024-10-13 19:41:16.761239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:26.974 [2024-10-13 19:41:16.761451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:27.232 [2024-10-13 19:41:16.865465] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:27.232 passed 00:11:27.232 Test: blockdev write read 8 blocks ...passed 00:11:27.232 Test: blockdev write read size > 128k ...passed 00:11:27.232 Test: blockdev write read invalid size ...passed 00:11:27.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:27.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:27.232 Test: blockdev write read max offset ...passed 00:11:27.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:27.490 Test: blockdev writev readv 8 blocks ...passed 00:11:27.490 Test: blockdev writev readv 30 x 1block ...passed 00:11:27.490 Test: blockdev writev readv block ...passed 00:11:27.490 Test: blockdev writev readv size > 128k ...passed 00:11:27.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:27.490 Test: blockdev comparev and writev ...[2024-10-13 19:41:17.167791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.167875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.167915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.167942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.168444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.168492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.168528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.168553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.169028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.169067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.169103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.169129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.169614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.169649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.169688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.490 [2024-10-13 19:41:17.169714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:27.490 passed 00:11:27.490 Test: blockdev nvme passthru rw ...passed 00:11:27.490 Test: blockdev nvme passthru vendor specific ...[2024-10-13 19:41:17.253880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.490 [2024-10-13 19:41:17.253942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.254207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.490 [2024-10-13 19:41:17.254251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.254517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.490 [2024-10-13 19:41:17.254555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:27.490 [2024-10-13 19:41:17.254785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.490 [2024-10-13 19:41:17.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:27.490 passed 00:11:27.490 Test: blockdev nvme admin passthru ...passed 00:11:27.748 Test: blockdev copy ...passed 00:11:27.748 00:11:27.748 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.748 suites 1 1 n/a 0 0 00:11:27.748 tests 23 23 23 0 0 00:11:27.748 asserts 152 152 152 0 n/a 00:11:27.748 00:11:27.748 Elapsed time = 1.631 seconds 00:11:28.342 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.342 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.342 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.602 rmmod nvme_tcp 00:11:28.602 rmmod nvme_fabrics 00:11:28.602 rmmod nvme_keyring 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
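For reference, the target that the bdevio suite above ran against was assembled entirely through the rpc_cmd calls visible in the trace: nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener, after which the bdevio binary was handed a generated JSON config on /dev/fd/62. Condensed into direct scripts/rpc.py invocations, the same setup looks roughly like the sketch below (a minimal reconstruction, not the verbatim harness code; the harness issues these inside the cvl_0_0_ns_spdk namespace against the default /var/tmp/spdk.sock socket):

# stand up the same NVMe-oF/TCP target the bdevio run above exercised
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420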
00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2909790 ']' 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2909790 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2909790 ']' 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2909790 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2909790 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2909790' 00:11:28.602 killing process with pid 2909790 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2909790 00:11:28.602 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2909790 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.976 19:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.878 00:11:31.878 real 0m9.566s 00:11:31.878 user 0m24.008s 00:11:31.878 sys 0m2.478s 00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.878 ************************************ 00:11:31.878 END TEST nvmf_bdevio 00:11:31.878 ************************************ 00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:31.878 00:11:31.878 real 4m29.979s 00:11:31.878 user 11m49.253s 00:11:31.878 sys 1m9.507s 
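Worth noting in the teardown just traced: the firewall rules that nvmftestinit added for this test were tagged with an SPDK_NVMF comment at insertion time, which is what lets nvmftestfini remove them here by filtering the saved ruleset instead of tracking rule positions, while remove_spdk_ns takes care of the cvl_0_0_ns_spdk namespace the target ran in. The tag-and-filter pair, copied from the commands in this log:

# setup (earlier in the run): every test rule carries an SPDK_NVMF comment
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown (here): drop all tagged rules in one pass, leave everything else untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore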
00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.878 ************************************ 00:11:31.878 END TEST nvmf_target_core 00:11:31.878 ************************************ 00:11:31.878 19:41:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:31.878 19:41:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.878 19:41:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.878 19:41:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.878 ************************************ 00:11:31.878 START TEST nvmf_target_extra 00:11:31.878 ************************************ 00:11:31.878 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:31.878 * Looking for test storage... 00:11:32.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:32.138 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:32.138 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:32.138 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:32.138 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.139 --rc genhtml_branch_coverage=1 00:11:32.139 --rc genhtml_function_coverage=1 00:11:32.139 --rc genhtml_legend=1 00:11:32.139 --rc geninfo_all_blocks=1 00:11:32.139 --rc geninfo_unexecuted_blocks=1 00:11:32.139 00:11:32.139 ' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.139 --rc genhtml_branch_coverage=1 00:11:32.139 --rc genhtml_function_coverage=1 00:11:32.139 --rc genhtml_legend=1 00:11:32.139 --rc geninfo_all_blocks=1 00:11:32.139 --rc geninfo_unexecuted_blocks=1 00:11:32.139 00:11:32.139 ' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.139 --rc genhtml_branch_coverage=1 00:11:32.139 --rc genhtml_function_coverage=1 00:11:32.139 --rc genhtml_legend=1 00:11:32.139 --rc geninfo_all_blocks=1 00:11:32.139 --rc geninfo_unexecuted_blocks=1 00:11:32.139 00:11:32.139 ' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:32.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.139 --rc genhtml_branch_coverage=1 00:11:32.139 --rc genhtml_function_coverage=1 00:11:32.139 --rc genhtml_legend=1 00:11:32.139 --rc geninfo_all_blocks=1 00:11:32.139 --rc geninfo_unexecuted_blocks=1 00:11:32.139 00:11:32.139 ' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
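The lt/cmp_versions trace just above is scripts/common.sh checking whether the installed lcov is older than 2 before it keeps the lcov_branch_coverage/lcov_function_coverage rc options in LCOV_OPTS. Stripped of the xtrace noise, the field-by-field comparison it performs is roughly the following sketch (a simplified reconstruction for numeric dotted versions, not the exact in-tree helper):

# return 0 (true) when dotted version $1 sorts before $2
version_lt() {
  local -a v1 v2
  local i
  IFS='.-:' read -ra v1 <<< "$1"
  IFS='.-:' read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov < 2: keep legacy coverage flags'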
00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 ************************************ 00:11:32.139 START TEST nvmf_example 00:11:32.139 ************************************ 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.139 * Looking for test storage... 
00:11:32.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.139 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.140 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:32.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.398 --rc genhtml_branch_coverage=1 00:11:32.398 --rc genhtml_function_coverage=1 00:11:32.398 --rc genhtml_legend=1 00:11:32.398 --rc geninfo_all_blocks=1 00:11:32.398 --rc geninfo_unexecuted_blocks=1 00:11:32.398 00:11:32.398 ' 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:32.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.398 --rc genhtml_branch_coverage=1 00:11:32.398 --rc genhtml_function_coverage=1 00:11:32.398 --rc genhtml_legend=1 00:11:32.398 --rc geninfo_all_blocks=1 00:11:32.398 --rc geninfo_unexecuted_blocks=1 00:11:32.398 00:11:32.398 ' 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:32.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.398 --rc genhtml_branch_coverage=1 00:11:32.398 --rc genhtml_function_coverage=1 00:11:32.398 --rc genhtml_legend=1 00:11:32.398 --rc geninfo_all_blocks=1 00:11:32.398 --rc geninfo_unexecuted_blocks=1 00:11:32.398 00:11:32.398 ' 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:32.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.398 --rc genhtml_branch_coverage=1 00:11:32.398 --rc genhtml_function_coverage=1 00:11:32.398 --rc genhtml_legend=1 00:11:32.398 --rc geninfo_all_blocks=1 00:11:32.398 --rc geninfo_unexecuted_blocks=1 00:11:32.398 00:11:32.398 ' 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:32.398 19:41:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.398 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:32.399 19:41:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.399 19:41:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:34.299 19:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.299 19:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.299 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.300 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:34.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:11:34.300 00:11:34.300 --- 10.0.0.2 ping statistics --- 00:11:34.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.300 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:34.300 00:11:34.300 --- 10.0.0.1 ping statistics --- 00:11:34.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.300 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2912350 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2912350 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2912350 ']' 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.300 19:41:24 
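The network bring-up traced above (nvmf_tcp_init) reduces to moving one E810 port into a private namespace, addressing both ports on 10.0.0.0/24, opening TCP port 4420, and ping-testing both directions. The following is a condensed sketch of those steps, not part of the test suite; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the addresses are specific to this run and would differ on other hosts. It needs root privileges.

#!/usr/bin/env bash
# Sketch: recreate the two-port NVMe/TCP test topology seen in the trace above.
set -euo pipefail
TGT_IF=cvl_0_0            # port moved into the target namespace (run-specific name)
INI_IF=cvl_0_1            # port left in the default (initiator) namespace
NS=cvl_0_0_ns_spdk        # namespace name used by this run

# Start from clean addressing, then build the namespace and wire up both ports.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic on the default discovery/IO port towards the initiator-side NIC.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, as the test does before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1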
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.300 19:41:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.673 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:35.673 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:35.673 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.673 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.673 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:35.674 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:47.869 Initializing NVMe Controllers 00:11:47.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.869 Initialization complete. Launching workers. 00:11:47.869 ======================================================== 00:11:47.869 Latency(us) 00:11:47.869 Device Information : IOPS MiB/s Average min max 00:11:47.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12223.15 47.75 5237.84 1316.50 15543.80 00:11:47.869 ======================================================== 00:11:47.869 Total : 12223.15 47.75 5237.84 1316.50 15543.80 00:11:47.869 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.869 rmmod nvme_tcp 00:11:47.869 rmmod nvme_fabrics 00:11:47.869 rmmod nvme_keyring 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2912350 ']' 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2912350 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2912350 ']' 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2912350 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2912350 00:11:47.869 19:41:35 
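The subsystem configuration and the 10-second I/O pass whose summary appears above can be reproduced with the same RPCs issued through scripts/rpc.py and the same spdk_nvme_perf invocation. This is a sketch only, assuming the example nvmf target is already running (as it was started earlier in the trace) and is listening on the default /var/tmp/spdk.sock RPC socket; every argument below is taken verbatim from the trace, and the workspace path is specific to this build host.

#!/usr/bin/env bash
# Sketch: configure the NVMe-oF/TCP subsystem and run the perf pass traced above.
set -euo pipefail
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path from this run
RPC="$SPDK/scripts/rpc.py"

# Transport, a 64 MiB malloc bdev with 512-byte blocks (created as Malloc0),
# the subsystem, its namespace, and a TCP listener on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Queue depth 64, 4 KiB random read/write mix (-M 30), 10 seconds, from the initiator side.
"$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'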
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:47.869 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:47.870 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2912350' 00:11:47.870 killing process with pid 2912350 00:11:47.870 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2912350 00:11:47.870 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2912350 00:11:47.870 nvmf threads initialize successfully 00:11:47.870 bdev subsystem init successfully 00:11:47.870 created a nvmf target service 00:11:47.870 create targets's poll groups done 00:11:47.870 all subsystems of target started 00:11:47.870 nvmf target is running 00:11:47.870 all subsystems of target stopped 00:11:47.870 destroy targets's poll groups done 00:11:47.870 destroyed the nvmf target service 00:11:47.870 bdev subsystem finish successfully 00:11:47.870 nvmf threads destroy successfully 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.870 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.870 19:41:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.870 19:41:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.870 19:41:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.246 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.246 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:49.246 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:49.246 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.508 00:11:49.508 real 0m17.249s 00:11:49.508 user 0m47.801s 00:11:49.508 sys 0m3.621s 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.508 ************************************ 00:11:49.508 END TEST nvmf_example 00:11:49.508 ************************************ 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.508 ************************************ 00:11:49.508 START TEST nvmf_filesystem 00:11:49.508 ************************************ 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.508 * Looking for test storage... 00:11:49.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:49.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.508 --rc genhtml_branch_coverage=1 00:11:49.508 --rc genhtml_function_coverage=1 00:11:49.508 --rc genhtml_legend=1 00:11:49.508 --rc geninfo_all_blocks=1 00:11:49.508 --rc geninfo_unexecuted_blocks=1 00:11:49.508 00:11:49.508 ' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:49.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.508 --rc genhtml_branch_coverage=1 00:11:49.508 --rc genhtml_function_coverage=1 00:11:49.508 --rc genhtml_legend=1 00:11:49.508 --rc geninfo_all_blocks=1 00:11:49.508 --rc geninfo_unexecuted_blocks=1 00:11:49.508 00:11:49.508 ' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:49.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.508 --rc genhtml_branch_coverage=1 00:11:49.508 --rc genhtml_function_coverage=1 00:11:49.508 --rc genhtml_legend=1 00:11:49.508 --rc geninfo_all_blocks=1 00:11:49.508 --rc geninfo_unexecuted_blocks=1 00:11:49.508 00:11:49.508 ' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:49.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.508 --rc genhtml_branch_coverage=1 00:11:49.508 --rc genhtml_function_coverage=1 00:11:49.508 --rc genhtml_legend=1 00:11:49.508 --rc geninfo_all_blocks=1 00:11:49.508 --rc geninfo_unexecuted_blocks=1 00:11:49.508 00:11:49.508 ' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:49.508 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:49.508 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:49.509 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:49.509 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:49.510 #define SPDK_CONFIG_H 00:11:49.510 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:49.510 #define SPDK_CONFIG_APPS 1 00:11:49.510 #define SPDK_CONFIG_ARCH native 00:11:49.510 #define SPDK_CONFIG_ASAN 1 00:11:49.510 #undef SPDK_CONFIG_AVAHI 00:11:49.510 #undef SPDK_CONFIG_CET 00:11:49.510 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:49.510 #define SPDK_CONFIG_COVERAGE 1 00:11:49.510 #define SPDK_CONFIG_CROSS_PREFIX 00:11:49.510 #undef SPDK_CONFIG_CRYPTO 00:11:49.510 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:49.510 #undef SPDK_CONFIG_CUSTOMOCF 00:11:49.510 #undef SPDK_CONFIG_DAOS 00:11:49.510 #define SPDK_CONFIG_DAOS_DIR 00:11:49.510 #define SPDK_CONFIG_DEBUG 1 00:11:49.510 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:49.510 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:49.510 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:49.510 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:49.510 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:49.510 #undef SPDK_CONFIG_DPDK_UADK 00:11:49.510 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.510 #define SPDK_CONFIG_EXAMPLES 1 00:11:49.510 #undef SPDK_CONFIG_FC 00:11:49.510 #define SPDK_CONFIG_FC_PATH 00:11:49.510 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:49.510 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:49.510 #define SPDK_CONFIG_FSDEV 1 00:11:49.510 #undef SPDK_CONFIG_FUSE 00:11:49.510 #undef SPDK_CONFIG_FUZZER 00:11:49.510 #define SPDK_CONFIG_FUZZER_LIB 00:11:49.510 #undef SPDK_CONFIG_GOLANG 00:11:49.510 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:49.510 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:49.510 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:49.510 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:49.510 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:49.510 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:49.510 #undef SPDK_CONFIG_HAVE_LZ4 00:11:49.510 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:49.510 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:49.510 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:49.510 #define SPDK_CONFIG_IDXD 1 00:11:49.510 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:49.510 #undef SPDK_CONFIG_IPSEC_MB 00:11:49.510 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:49.510 #define SPDK_CONFIG_ISAL 1 00:11:49.510 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:49.510 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:49.510 #define SPDK_CONFIG_LIBDIR 00:11:49.510 #undef SPDK_CONFIG_LTO 00:11:49.510 #define SPDK_CONFIG_MAX_LCORES 128 00:11:49.510 #define SPDK_CONFIG_NVME_CUSE 1 00:11:49.510 #undef SPDK_CONFIG_OCF 00:11:49.510 #define SPDK_CONFIG_OCF_PATH 00:11:49.510 #define SPDK_CONFIG_OPENSSL_PATH 00:11:49.510 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:49.510 #define SPDK_CONFIG_PGO_DIR 00:11:49.510 #undef SPDK_CONFIG_PGO_USE 00:11:49.510 #define SPDK_CONFIG_PREFIX /usr/local 00:11:49.510 #undef SPDK_CONFIG_RAID5F 00:11:49.510 #undef SPDK_CONFIG_RBD 00:11:49.510 #define SPDK_CONFIG_RDMA 1 00:11:49.510 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:49.510 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:49.510 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:49.510 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:49.510 #define SPDK_CONFIG_SHARED 1 00:11:49.510 #undef SPDK_CONFIG_SMA 00:11:49.510 #define SPDK_CONFIG_TESTS 1 00:11:49.510 #undef SPDK_CONFIG_TSAN 00:11:49.510 #define SPDK_CONFIG_UBLK 1 00:11:49.510 #define SPDK_CONFIG_UBSAN 1 00:11:49.510 #undef SPDK_CONFIG_UNIT_TESTS 00:11:49.510 #undef SPDK_CONFIG_URING 00:11:49.510 #define 
SPDK_CONFIG_URING_PATH 00:11:49.510 #undef SPDK_CONFIG_URING_ZNS 00:11:49.510 #undef SPDK_CONFIG_USDT 00:11:49.510 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:49.510 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:49.510 #undef SPDK_CONFIG_VFIO_USER 00:11:49.510 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:49.510 #define SPDK_CONFIG_VHOST 1 00:11:49.510 #define SPDK_CONFIG_VIRTIO 1 00:11:49.510 #undef SPDK_CONFIG_VTUNE 00:11:49.510 #define SPDK_CONFIG_VTUNE_DIR 00:11:49.510 #define SPDK_CONFIG_WERROR 1 00:11:49.510 #define SPDK_CONFIG_WPDK_DIR 00:11:49.510 #undef SPDK_CONFIG_XNVME 00:11:49.510 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.510 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:49.510 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:49.511 
19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:49.511 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:49.512 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
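Reading aid: the very long LD_LIBRARY_PATH and PYTHONPATH values above are only the same few directories (spdk/build/lib, spdk/dpdk/build/lib, spdk/build/libvfio-user/usr/local/lib, and the python/rpc_plugins pair) appended once more each time autotest_common.sh is re-sourced. The sanitizer environment exported alongside them, condensed from the trace entries above (values copied verbatim; a sketch of the effect, not the harness source), amounts to:

# sanitizer and python settings exported by autotest_common.sh in the entries above
export PYTHONDONTWRITEBYTECODE=1    # keep the workspace free of .pyc files
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134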
00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
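The same block builds the LeakSanitizer suppression file and records the default RPC socket before the QEMU and AR tool paths are exported. Condensed from the trace above (paths verbatim; how the suppression file is actually written is not fully visible in the trace, so the plain echo below is an assumption standing in for it):

# LSAN suppression setup as traced above (sketch)
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo 'leak:libfuse3.so' >> "$asan_suppression_file"       # known libfuse3 leak is ignored
export LSAN_OPTIONS=suppressions=$asan_suppression_file
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock                # default SPDK RPC socket for the run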
00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:49.512 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2914316 ]] 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2914316 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
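The set_test_storage 2147483648 call that closes the entry above is what the next entries trace: the helper lists candidate directories, reads one df -T pass into per-mount byte counts, and exports the first candidate whose filesystem has the requested 2 GiB free. A condensed, self-contained sketch of that selection (directory names are the ones visible in the log; the 1K-to-bytes conversion and the echo-style simplifications are assumptions inferred from the byte-scale values in the trace, not the harness source):

# rough shape of the storage probe traced below (sketch)
requested_size=2147483648                                 # 2 GiB requested by the caller
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
storage_fallback=$(mktemp -udt spdk.XXXXXX)               # /tmp/spdk.i0MU6Q in this run
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

declare -A avails
while read -r _ _ _ _ avail _ mount; do
    avails[$mount]=$((avail * 1024))                      # df -T prints 1K blocks; keep bytes
done < <(df -T | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do
    # the real helper also climbs to the nearest existing parent of target_dir
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # mount point backing the dir
    if (( ${avails[$mount]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir              # the first candidate wins in this run
        break
    fi
done

The entries that follow also compute a projected new_size and compare it against 95% of the filesystem size before printing the "Found test storage at" line.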
00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.i0MU6Q 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.i0MU6Q/tests/target /tmp/spdk.i0MU6Q 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:49.513 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=55097462784 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6891065344 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982897664 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993838080 00:11:49.513 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=425984 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.773 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:49.773 * Looking for test storage... 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=55097462784 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9105657856 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:49.773 19:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:49.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.773 --rc genhtml_branch_coverage=1 00:11:49.773 --rc genhtml_function_coverage=1 00:11:49.773 --rc genhtml_legend=1 00:11:49.773 --rc geninfo_all_blocks=1 00:11:49.773 --rc geninfo_unexecuted_blocks=1 00:11:49.773 00:11:49.773 ' 00:11:49.773 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:49.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.773 --rc genhtml_branch_coverage=1 00:11:49.774 --rc genhtml_function_coverage=1 00:11:49.774 --rc genhtml_legend=1 00:11:49.774 --rc geninfo_all_blocks=1 00:11:49.774 --rc geninfo_unexecuted_blocks=1 00:11:49.774 00:11:49.774 ' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:49.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.774 --rc genhtml_branch_coverage=1 00:11:49.774 --rc genhtml_function_coverage=1 00:11:49.774 --rc genhtml_legend=1 00:11:49.774 --rc geninfo_all_blocks=1 00:11:49.774 --rc geninfo_unexecuted_blocks=1 00:11:49.774 00:11:49.774 ' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:49.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.774 --rc genhtml_branch_coverage=1 00:11:49.774 --rc genhtml_function_coverage=1 00:11:49.774 --rc genhtml_legend=1 00:11:49.774 --rc geninfo_all_blocks=1 00:11:49.774 --rc geninfo_unexecuted_blocks=1 00:11:49.774 00:11:49.774 ' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.774 19:41:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:52.307 
19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.307 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.307 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.307 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.307 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:52.307 00:11:52.307 --- 10.0.0.2 ping statistics --- 00:11:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.307 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:52.307 00:11:52.307 --- 10.0.0.1 ping statistics --- 00:11:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.307 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:52.307 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.308 ************************************ 00:11:52.308 START TEST nvmf_filesystem_no_in_capsule 00:11:52.308 ************************************ 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
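The nvmf_tcp_init sequence traced above reduces to roughly the following shell sketch. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.1/10.0.0.2 addresses and port 4420 are read straight from the trace; the actual helper lives in test/nvmf/common.sh, so treat this as an illustration of the topology rather than the literal code:

    # target-side port moves into its own namespace, initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port ahead of any default-deny rules (the trace also tags the rule with an SPDK_NVMF comment)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions, then load the initiator driver
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp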
00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2915959 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2915959 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2915959 ']' 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.308 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.308 [2024-10-13 19:41:41.790065] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:11:52.308 [2024-10-13 19:41:41.790213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.308 [2024-10-13 19:41:41.938688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.308 [2024-10-13 19:41:42.080901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.308 [2024-10-13 19:41:42.080992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.308 [2024-10-13 19:41:42.081018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.308 [2024-10-13 19:41:42.081043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.308 [2024-10-13 19:41:42.081062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
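nvmfappstart and waitforlisten, as exercised above, amount to launching the target inside the namespace and polling its RPC socket until it responds. The nvmf_tgt command line below is copied from the trace; the polling loop is a hedged approximation of what the harness's waitforlisten helper does, not its literal implementation:

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target answers
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done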
00:11:52.308 [2024-10-13 19:41:42.083939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.308 [2024-10-13 19:41:42.084007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.308 [2024-10-13 19:41:42.084095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.308 [2024-10-13 19:41:42.084101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.241 [2024-10-13 19:41:42.778079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.241 19:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 Malloc1 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.807 19:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 [2024-10-13 19:41:43.376508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:53.807 { 00:11:53.807 "name": "Malloc1", 00:11:53.807 "aliases": [ 00:11:53.807 "cc2ef202-6f93-4b6d-a2a8-242873295339" 00:11:53.807 ], 00:11:53.807 "product_name": "Malloc disk", 00:11:53.807 "block_size": 512, 00:11:53.807 "num_blocks": 1048576, 00:11:53.807 "uuid": "cc2ef202-6f93-4b6d-a2a8-242873295339", 00:11:53.807 "assigned_rate_limits": { 00:11:53.807 "rw_ios_per_sec": 0, 00:11:53.807 "rw_mbytes_per_sec": 0, 00:11:53.807 "r_mbytes_per_sec": 0, 00:11:53.807 "w_mbytes_per_sec": 0 00:11:53.807 }, 00:11:53.807 "claimed": true, 00:11:53.807 "claim_type": "exclusive_write", 00:11:53.807 "zoned": false, 00:11:53.807 "supported_io_types": { 00:11:53.807 "read": 
true, 00:11:53.807 "write": true, 00:11:53.807 "unmap": true, 00:11:53.807 "flush": true, 00:11:53.807 "reset": true, 00:11:53.807 "nvme_admin": false, 00:11:53.807 "nvme_io": false, 00:11:53.807 "nvme_io_md": false, 00:11:53.807 "write_zeroes": true, 00:11:53.807 "zcopy": true, 00:11:53.807 "get_zone_info": false, 00:11:53.807 "zone_management": false, 00:11:53.807 "zone_append": false, 00:11:53.807 "compare": false, 00:11:53.807 "compare_and_write": false, 00:11:53.807 "abort": true, 00:11:53.807 "seek_hole": false, 00:11:53.807 "seek_data": false, 00:11:53.807 "copy": true, 00:11:53.807 "nvme_iov_md": false 00:11:53.807 }, 00:11:53.807 "memory_domains": [ 00:11:53.807 { 00:11:53.807 "dma_device_id": "system", 00:11:53.807 "dma_device_type": 1 00:11:53.807 }, 00:11:53.807 { 00:11:53.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.807 "dma_device_type": 2 00:11:53.807 } 00:11:53.807 ], 00:11:53.807 "driver_specific": {} 00:11:53.807 } 00:11:53.807 ]' 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:53.807 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:53.808 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:53.808 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:53.808 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:53.808 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:53.808 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.808 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.373 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.373 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.373 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.373 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.373 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.900 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:57.465 19:41:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.397 ************************************ 00:11:58.397 START TEST filesystem_ext4 00:11:58.397 ************************************ 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
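Each filesystem_* subtest that follows (ext4 here, then btrfs and xfs) runs the same body from target/filesystem.sh against the exported namespace. Condensed, with the device and mountpoint names taken from the trace (the mkfs force flag is -F for ext4, -f for btrfs and xfs), it is roughly:

    mkfs.ext4 -F /dev/nvme0n1p1               # make_filesystem: mkfs.$fstype $force on the GPT partition
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # the target process must still be alive afterwards
    lsblk -l -o NAME | grep -q -w nvme0n1     # and both the namespace ...
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # ... and its partition must still be visible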
00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:58.397 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:58.397 mke2fs 1.47.0 (5-Feb-2023) 00:11:58.397 Discarding device blocks: 0/522240 done 00:11:58.397 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:58.397 Filesystem UUID: aa16e97c-f8e0-4a69-9f0b-92de202c7ed7 00:11:58.397 Superblock backups stored on blocks: 00:11:58.397 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:58.397 00:11:58.397 Allocating group tables: 0/64 done 00:11:58.397 Writing inode tables: 0/64 done 00:11:58.654 Creating journal (8192 blocks): done 00:11:58.654 Writing superblocks and filesystem accounting information: 0/64 done 00:11:58.654 00:11:58.654 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:58.654 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.207 
19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2915959 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.207 00:12:05.207 real 0m6.368s 00:12:05.207 user 0m0.014s 00:12:05.207 sys 0m0.064s 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:05.207 ************************************ 00:12:05.207 END TEST filesystem_ext4 00:12:05.207 ************************************ 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.207 ************************************ 00:12:05.207 START TEST filesystem_btrfs 00:12:05.207 ************************************ 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:05.207 19:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:05.207 btrfs-progs v6.8.1 00:12:05.207 See https://btrfs.readthedocs.io for more information. 00:12:05.207 00:12:05.207 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:05.207 NOTE: several default settings have changed in version 5.15, please make sure 00:12:05.207 this does not affect your deployments: 00:12:05.207 - DUP for metadata (-m dup) 00:12:05.207 - enabled no-holes (-O no-holes) 00:12:05.207 - enabled free-space-tree (-R free-space-tree) 00:12:05.207 00:12:05.207 Label: (null) 00:12:05.207 UUID: 90f43f6b-641f-4f3c-aa91-da3f98dcf2e3 00:12:05.207 Node size: 16384 00:12:05.207 Sector size: 4096 (CPU page size: 4096) 00:12:05.207 Filesystem size: 510.00MiB 00:12:05.207 Block group profiles: 00:12:05.207 Data: single 8.00MiB 00:12:05.207 Metadata: DUP 32.00MiB 00:12:05.207 System: DUP 8.00MiB 00:12:05.207 SSD detected: yes 00:12:05.207 Zoned device: no 00:12:05.207 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:05.207 Checksum: crc32c 00:12:05.207 Number of devices: 1 00:12:05.207 Devices: 00:12:05.207 ID SIZE PATH 00:12:05.207 1 510.00MiB /dev/nvme0n1p1 00:12:05.207 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:05.207 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2915959 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.775 
19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.775 00:12:05.775 real 0m0.975s 00:12:05.775 user 0m0.023s 00:12:05.775 sys 0m0.104s 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:05.775 ************************************ 00:12:05.775 END TEST filesystem_btrfs 00:12:05.775 ************************************ 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.775 ************************************ 00:12:05.775 START TEST filesystem_xfs 00:12:05.775 ************************************ 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.775 19:41:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:05.775 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:05.775 = sectsz=512 attr=2, projid32bit=1 00:12:05.775 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:05.775 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:05.775 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:05.775 = sunit=0 swidth=0 blks 00:12:05.775 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:05.775 log =internal log bsize=4096 blocks=16384, version=2 00:12:05.775 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:05.775 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:06.773 Discarding blocks...Done. 00:12:06.773 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:06.774 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2915959 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.299 00:12:09.299 real 0m3.449s 00:12:09.299 user 0m0.015s 00:12:09.299 sys 0m0.062s 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.299 ************************************ 00:12:09.299 END TEST filesystem_xfs 00:12:09.299 ************************************ 00:12:09.299 19:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.557 19:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2915959 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2915959 ']' 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2915959 00:12:09.557 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2915959 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2915959' 00:12:09.815 killing process with pid 2915959 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2915959 00:12:09.815 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 2915959 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:12.344 00:12:12.344 real 0m20.122s 00:12:12.344 user 1m16.070s 00:12:12.344 sys 0m2.631s 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.344 ************************************ 00:12:12.344 END TEST nvmf_filesystem_no_in_capsule 00:12:12.344 ************************************ 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.344 ************************************ 00:12:12.344 START TEST nvmf_filesystem_in_capsule 00:12:12.344 ************************************ 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2918620 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2918620 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2918620 ']' 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
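The nvmf_filesystem_in_capsule pass that starts here repeats the previous run with one difference: the in-capsule data size passed when the transport is created. A sketch of the provisioning calls as they appear in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py, talking to the target started above):

    # first pass (no_in_capsule):  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this pass (in_capsule=4096):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # the rest of the provisioning is identical in both passes
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420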
00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.344 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.344 [2024-10-13 19:42:01.957179] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:12.345 [2024-10-13 19:42:01.957313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.345 [2024-10-13 19:42:02.110757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.603 [2024-10-13 19:42:02.252810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.603 [2024-10-13 19:42:02.252899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.603 [2024-10-13 19:42:02.252924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.603 [2024-10-13 19:42:02.252947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.603 [2024-10-13 19:42:02.252967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.603 [2024-10-13 19:42:02.256124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.603 [2024-10-13 19:42:02.256198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.603 [2024-10-13 19:42:02.256295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.603 [2024-10-13 19:42:02.256300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.168 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 [2024-10-13 19:42:02.972608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.426 19:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.426 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:13.426 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.426 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.991 Malloc1 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.991 [2024-10-13 19:42:03.561750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:13.991 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:13.992 19:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:13.992 { 00:12:13.992 "name": "Malloc1", 00:12:13.992 "aliases": [ 00:12:13.992 "082fe868-e9f9-4f75-8e88-eaa211c894c1" 00:12:13.992 ], 00:12:13.992 "product_name": "Malloc disk", 00:12:13.992 "block_size": 512, 00:12:13.992 "num_blocks": 1048576, 00:12:13.992 "uuid": "082fe868-e9f9-4f75-8e88-eaa211c894c1", 00:12:13.992 "assigned_rate_limits": { 00:12:13.992 "rw_ios_per_sec": 0, 00:12:13.992 "rw_mbytes_per_sec": 0, 00:12:13.992 "r_mbytes_per_sec": 0, 00:12:13.992 "w_mbytes_per_sec": 0 00:12:13.992 }, 00:12:13.992 "claimed": true, 00:12:13.992 "claim_type": "exclusive_write", 00:12:13.992 "zoned": false, 00:12:13.992 "supported_io_types": { 00:12:13.992 "read": true, 00:12:13.992 "write": true, 00:12:13.992 "unmap": true, 00:12:13.992 "flush": true, 00:12:13.992 "reset": true, 00:12:13.992 "nvme_admin": false, 00:12:13.992 "nvme_io": false, 00:12:13.992 "nvme_io_md": false, 00:12:13.992 "write_zeroes": true, 00:12:13.992 "zcopy": true, 00:12:13.992 "get_zone_info": false, 00:12:13.992 "zone_management": false, 00:12:13.992 "zone_append": false, 00:12:13.992 "compare": false, 00:12:13.992 "compare_and_write": false, 00:12:13.992 "abort": true, 00:12:13.992 "seek_hole": false, 00:12:13.992 "seek_data": false, 00:12:13.992 "copy": true, 00:12:13.992 "nvme_iov_md": false 00:12:13.992 }, 00:12:13.992 "memory_domains": [ 00:12:13.992 { 00:12:13.992 "dma_device_id": "system", 00:12:13.992 "dma_device_type": 1 00:12:13.992 }, 00:12:13.992 { 00:12:13.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.992 "dma_device_type": 2 00:12:13.992 } 00:12:13.992 ], 00:12:13.992 "driver_specific": {} 00:12:13.992 } 00:12:13.992 ]' 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:13.992 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.558 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.558 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:14.558 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.558 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:14.558 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:16.455 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:16.455 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:16.455 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:16.714 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:16.714 19:42:06 
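For reference, the target bring-up and host attach recorded in the xtrace above reduce to roughly the following sequence. This is a sketch, not the autotest script itself: rpc_cmd in the log is the suite's wrapper around SPDK's scripts/rpc.py, and waitforserial simply polls lsblk; the flags, NQN, serial and addresses are copied verbatim from this run.

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096        # TCP transport, 4096-byte in-capsule data (the "-c 4096" this test exercises)
  rpc.py bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB RAM-backed bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME        # wait until the namespace shows up as nvme0n1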
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:17.279 19:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.212 ************************************ 00:12:18.212 START TEST filesystem_in_capsule_ext4 00:12:18.212 ************************************ 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:18.212 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:18.212 mke2fs 1.47.0 (5-Feb-2023) 00:12:18.212 Discarding device blocks: 0/522240 done 00:12:18.471 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:18.471 Filesystem UUID: 913b3bb3-d54c-4d92-8da3-d59643d24cf7 00:12:18.471 Superblock backups stored on blocks: 00:12:18.471 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:18.471 00:12:18.471 Allocating group tables: 0/64 done 00:12:18.471 Writing inode tables: 
0/64 done 00:12:18.471 Creating journal (8192 blocks): done 00:12:18.471 Writing superblocks and filesystem accounting information: 0/64 done 00:12:18.471 00:12:18.471 19:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:18.471 19:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2918620 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.026 00:12:25.026 real 0m6.019s 00:12:25.026 user 0m0.016s 00:12:25.026 sys 0m0.062s 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:25.026 ************************************ 00:12:25.026 END TEST filesystem_in_capsule_ext4 00:12:25.026 ************************************ 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.026 
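Each filesystem_in_capsule_* subtest (ext4 above, btrfs and xfs below) runs the same smoke test against the exported namespace; stripped of the xtrace prefixes it is approximately the sequence below, with only the mkfs command changing per filesystem. nvmfpid stands for the target's pid (2918620 in this run).

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1              # btrfs and xfs runs use mkfs.btrfs -f / mkfs.xfs -f instead
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 $nvmfpid                         # the target process must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # and the partition must still be visible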
************************************ 00:12:25.026 START TEST filesystem_in_capsule_btrfs 00:12:25.026 ************************************ 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:25.026 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:25.026 btrfs-progs v6.8.1 00:12:25.026 See https://btrfs.readthedocs.io for more information. 00:12:25.026 00:12:25.026 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:25.026 NOTE: several default settings have changed in version 5.15, please make sure 00:12:25.026 this does not affect your deployments: 00:12:25.026 - DUP for metadata (-m dup) 00:12:25.026 - enabled no-holes (-O no-holes) 00:12:25.026 - enabled free-space-tree (-R free-space-tree) 00:12:25.026 00:12:25.026 Label: (null) 00:12:25.026 UUID: bbae10d7-8388-4bb3-9efe-8b6cf6c8b9ec 00:12:25.026 Node size: 16384 00:12:25.026 Sector size: 4096 (CPU page size: 4096) 00:12:25.026 Filesystem size: 510.00MiB 00:12:25.026 Block group profiles: 00:12:25.026 Data: single 8.00MiB 00:12:25.026 Metadata: DUP 32.00MiB 00:12:25.026 System: DUP 8.00MiB 00:12:25.026 SSD detected: yes 00:12:25.026 Zoned device: no 00:12:25.026 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:25.026 Checksum: crc32c 00:12:25.026 Number of devices: 1 00:12:25.026 Devices: 00:12:25.026 ID SIZE PATH 00:12:25.026 1 510.00MiB /dev/nvme0n1p1 00:12:25.026 00:12:25.026 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:25.026 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.026 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2918620 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.027 00:12:25.027 real 0m0.807s 00:12:25.027 user 0m0.017s 00:12:25.027 sys 0m0.103s 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:25.027 ************************************ 00:12:25.027 END TEST filesystem_in_capsule_btrfs 00:12:25.027 ************************************ 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.027 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.285 ************************************ 00:12:25.285 START TEST filesystem_in_capsule_xfs 00:12:25.285 ************************************ 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:25.285 19:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.285 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.285 = sectsz=512 attr=2, projid32bit=1 00:12:25.285 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.285 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:25.285 data = bsize=4096 blocks=130560, imaxpct=25 00:12:25.285 = sunit=0 swidth=0 blks 00:12:25.285 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.285 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.285 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.285 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:26.219 Discarding blocks...Done. 
00:12:26.219 19:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:26.219 19:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2918620 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.746 00:12:28.746 real 0m3.360s 00:12:28.746 user 0m0.013s 00:12:28.746 sys 0m0.063s 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:28.746 ************************************ 00:12:28.746 END TEST filesystem_in_capsule_xfs 00:12:28.746 ************************************ 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.746 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2918620 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2918620 ']' 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2918620 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2918620 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2918620' 00:12:29.004 killing process with pid 2918620 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2918620 00:12:29.004 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2918620 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:31.534 00:12:31.534 real 0m19.243s 00:12:31.534 user 1m12.757s 00:12:31.534 sys 0m2.481s 00:12:31.534 19:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.534 ************************************ 00:12:31.534 END TEST nvmf_filesystem_in_capsule 00:12:31.534 ************************************ 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.534 rmmod nvme_tcp 00:12:31.534 rmmod nvme_fabrics 00:12:31.534 rmmod nvme_keyring 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.534 19:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.436 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.436 00:12:33.436 real 0m44.120s 00:12:33.436 user 2m29.932s 00:12:33.436 sys 0m6.773s 00:12:33.436 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.437 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:33.437 
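Teardown, as logged above, disconnects the initiator, removes the subsystem, stops the target and restores host networking. A rough sketch of what the wrappers (killprocess, nvmftestfini, _remove_spdk_ns) do in this run; the namespace-deletion step is assumed from the helper's name rather than shown verbatim in the log:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill $nvmfpid                                       # 2918620 here
  modprobe -r nvme-tcp && modprobe -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the test's ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                     # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1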
************************************ 00:12:33.437 END TEST nvmf_filesystem 00:12:33.437 ************************************ 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.696 ************************************ 00:12:33.696 START TEST nvmf_target_discovery 00:12:33.696 ************************************ 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:33.696 * Looking for test storage... 00:12:33.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.696 --rc genhtml_branch_coverage=1 00:12:33.696 --rc genhtml_function_coverage=1 00:12:33.696 --rc genhtml_legend=1 00:12:33.696 --rc geninfo_all_blocks=1 00:12:33.696 --rc geninfo_unexecuted_blocks=1 00:12:33.696 00:12:33.696 ' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.696 --rc genhtml_branch_coverage=1 00:12:33.696 --rc genhtml_function_coverage=1 00:12:33.696 --rc genhtml_legend=1 00:12:33.696 --rc geninfo_all_blocks=1 00:12:33.696 --rc geninfo_unexecuted_blocks=1 00:12:33.696 00:12:33.696 ' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.696 --rc genhtml_branch_coverage=1 00:12:33.696 --rc genhtml_function_coverage=1 00:12:33.696 --rc genhtml_legend=1 00:12:33.696 --rc geninfo_all_blocks=1 00:12:33.696 --rc geninfo_unexecuted_blocks=1 00:12:33.696 00:12:33.696 ' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.696 --rc genhtml_branch_coverage=1 00:12:33.696 --rc genhtml_function_coverage=1 00:12:33.696 --rc genhtml_legend=1 00:12:33.696 --rc geninfo_all_blocks=1 00:12:33.696 --rc geninfo_unexecuted_blocks=1 00:12:33.696 00:12:33.696 ' 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.696 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.697 19:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.228 19:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.228 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.229 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.229 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.229 19:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:12:36.229 00:12:36.229 --- 10.0.0.2 ping statistics --- 00:12:36.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.229 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:12:36.229 00:12:36.229 --- 10.0.0.1 ping statistics --- 00:12:36.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.229 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2923497 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2923497 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2923497 ']' 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.229 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.229 [2024-10-13 19:42:25.775331] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:36.229 [2024-10-13 19:42:25.775497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.229 [2024-10-13 19:42:25.922334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.487 [2024-10-13 19:42:26.068182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.487 [2024-10-13 19:42:26.068267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.487 [2024-10-13 19:42:26.068306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.487 [2024-10-13 19:42:26.068343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.487 [2024-10-13 19:42:26.068375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
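(Editor's note: with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace, the target application is launched inside that namespace so its TCP listener binds to the namespaced 10.0.0.2 address, and the harness waits for the RPC socket before issuing any rpc_cmd calls. The lines below are a hedged sketch of the equivalent manual startup, assuming the build-tree paths shown in the log; the test itself drives this through nvmfappstart and waitforlisten rather than the literal polling loop shown here.)

# Sketch only: start nvmf_tgt inside the target namespace and wait for its
# RPC socket before sending any rpc.py commands.
sudo ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                      # RPC listener not up yet
done
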
00:12:36.487 [2024-10-13 19:42:26.071338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.487 [2024-10-13 19:42:26.071408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.487 [2024-10-13 19:42:26.071455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.487 [2024-10-13 19:42:26.071456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 [2024-10-13 19:42:26.759065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 Null1 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 19:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 [2024-10-13 19:42:26.809291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 Null2 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.053 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:37.054 Null3 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.054 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.311 Null4 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.311 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.312 19:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.312 19:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:37.570 00:12:37.570 Discovery Log Number of Records 6, Generation counter 6 00:12:37.570 =====Discovery Log Entry 0====== 00:12:37.570 trtype: tcp 00:12:37.570 adrfam: ipv4 00:12:37.570 subtype: current discovery subsystem 00:12:37.570 treq: not required 00:12:37.570 portid: 0 00:12:37.570 trsvcid: 4420 00:12:37.570 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:37.570 traddr: 10.0.0.2 00:12:37.570 eflags: explicit discovery connections, duplicate discovery information 00:12:37.570 sectype: none 00:12:37.570 =====Discovery Log Entry 1====== 00:12:37.570 trtype: tcp 00:12:37.570 adrfam: ipv4 00:12:37.570 subtype: nvme subsystem 00:12:37.570 treq: not required 00:12:37.570 portid: 0 00:12:37.570 trsvcid: 4420 00:12:37.570 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:37.570 traddr: 10.0.0.2 00:12:37.570 eflags: none 00:12:37.570 sectype: none 00:12:37.570 =====Discovery Log Entry 2====== 00:12:37.570 trtype: tcp 00:12:37.570 adrfam: ipv4 00:12:37.570 subtype: nvme subsystem 00:12:37.570 treq: not required 00:12:37.570 portid: 0 00:12:37.570 trsvcid: 4420 00:12:37.570 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:37.570 traddr: 10.0.0.2 00:12:37.570 eflags: none 00:12:37.570 sectype: none 00:12:37.570 =====Discovery Log Entry 3====== 00:12:37.570 trtype: tcp 00:12:37.570 adrfam: ipv4 00:12:37.570 subtype: nvme subsystem 00:12:37.570 treq: not required 00:12:37.570 portid: 0 00:12:37.570 trsvcid: 4420 00:12:37.570 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:37.570 traddr: 10.0.0.2 00:12:37.570 eflags: none 00:12:37.570 sectype: none 00:12:37.570 =====Discovery Log Entry 4====== 00:12:37.570 trtype: tcp 00:12:37.570 adrfam: ipv4 00:12:37.570 subtype: nvme subsystem 
00:12:37.570 treq: not required 00:12:37.570 portid: 0 00:12:37.570 trsvcid: 4420 00:12:37.570 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:37.570 traddr: 10.0.0.2 00:12:37.570 eflags: none 00:12:37.570 sectype: none 00:12:37.570 =====Discovery Log Entry 5====== 00:12:37.570 trtype: tcp 00:12:37.570 adrfam: ipv4 00:12:37.570 subtype: discovery subsystem referral 00:12:37.570 treq: not required 00:12:37.570 portid: 0 00:12:37.570 trsvcid: 4430 00:12:37.570 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:37.570 traddr: 10.0.0.2 00:12:37.570 eflags: none 00:12:37.570 sectype: none 00:12:37.570 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:37.570 Perform nvmf subsystem discovery via RPC 00:12:37.570 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:37.570 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.570 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.570 [ 00:12:37.570 { 00:12:37.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:37.570 "subtype": "Discovery", 00:12:37.570 "listen_addresses": [ 00:12:37.570 { 00:12:37.570 "trtype": "TCP", 00:12:37.570 "adrfam": "IPv4", 00:12:37.570 "traddr": "10.0.0.2", 00:12:37.570 "trsvcid": "4420" 00:12:37.570 } 00:12:37.570 ], 00:12:37.570 "allow_any_host": true, 00:12:37.570 "hosts": [] 00:12:37.570 }, 00:12:37.570 { 00:12:37.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.570 "subtype": "NVMe", 00:12:37.570 "listen_addresses": [ 00:12:37.570 { 00:12:37.570 "trtype": "TCP", 00:12:37.570 "adrfam": "IPv4", 00:12:37.570 "traddr": "10.0.0.2", 00:12:37.570 "trsvcid": "4420" 00:12:37.570 } 00:12:37.570 ], 00:12:37.570 "allow_any_host": true, 00:12:37.570 "hosts": [], 00:12:37.570 "serial_number": "SPDK00000000000001", 00:12:37.570 "model_number": "SPDK bdev Controller", 00:12:37.570 "max_namespaces": 32, 00:12:37.570 "min_cntlid": 1, 00:12:37.570 "max_cntlid": 65519, 00:12:37.570 "namespaces": [ 00:12:37.570 { 00:12:37.570 "nsid": 1, 00:12:37.570 "bdev_name": "Null1", 00:12:37.570 "name": "Null1", 00:12:37.570 "nguid": "018D83FD5B274B7386892758A6919B96", 00:12:37.570 "uuid": "018d83fd-5b27-4b73-8689-2758a6919b96" 00:12:37.570 } 00:12:37.570 ] 00:12:37.570 }, 00:12:37.570 { 00:12:37.570 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:37.570 "subtype": "NVMe", 00:12:37.570 "listen_addresses": [ 00:12:37.570 { 00:12:37.570 "trtype": "TCP", 00:12:37.570 "adrfam": "IPv4", 00:12:37.570 "traddr": "10.0.0.2", 00:12:37.570 "trsvcid": "4420" 00:12:37.570 } 00:12:37.570 ], 00:12:37.570 "allow_any_host": true, 00:12:37.570 "hosts": [], 00:12:37.570 "serial_number": "SPDK00000000000002", 00:12:37.570 "model_number": "SPDK bdev Controller", 00:12:37.570 "max_namespaces": 32, 00:12:37.570 "min_cntlid": 1, 00:12:37.570 "max_cntlid": 65519, 00:12:37.570 "namespaces": [ 00:12:37.570 { 00:12:37.570 "nsid": 1, 00:12:37.570 "bdev_name": "Null2", 00:12:37.570 "name": "Null2", 00:12:37.571 "nguid": "5DFB199A9E164339AB3554209E1EB259", 00:12:37.571 "uuid": "5dfb199a-9e16-4339-ab35-54209e1eb259" 00:12:37.571 } 00:12:37.571 ] 00:12:37.571 }, 00:12:37.571 { 00:12:37.571 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:37.571 "subtype": "NVMe", 00:12:37.571 "listen_addresses": [ 00:12:37.571 { 00:12:37.571 "trtype": "TCP", 00:12:37.571 "adrfam": "IPv4", 00:12:37.571 "traddr": "10.0.0.2", 
00:12:37.571 "trsvcid": "4420" 00:12:37.571 } 00:12:37.571 ], 00:12:37.571 "allow_any_host": true, 00:12:37.571 "hosts": [], 00:12:37.571 "serial_number": "SPDK00000000000003", 00:12:37.571 "model_number": "SPDK bdev Controller", 00:12:37.571 "max_namespaces": 32, 00:12:37.571 "min_cntlid": 1, 00:12:37.571 "max_cntlid": 65519, 00:12:37.571 "namespaces": [ 00:12:37.571 { 00:12:37.571 "nsid": 1, 00:12:37.571 "bdev_name": "Null3", 00:12:37.571 "name": "Null3", 00:12:37.571 "nguid": "CBA2F2A9B8204D5287FF41454B8F333B", 00:12:37.571 "uuid": "cba2f2a9-b820-4d52-87ff-41454b8f333b" 00:12:37.571 } 00:12:37.571 ] 00:12:37.571 }, 00:12:37.571 { 00:12:37.571 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:37.571 "subtype": "NVMe", 00:12:37.571 "listen_addresses": [ 00:12:37.571 { 00:12:37.571 "trtype": "TCP", 00:12:37.571 "adrfam": "IPv4", 00:12:37.571 "traddr": "10.0.0.2", 00:12:37.571 "trsvcid": "4420" 00:12:37.571 } 00:12:37.571 ], 00:12:37.571 "allow_any_host": true, 00:12:37.571 "hosts": [], 00:12:37.571 "serial_number": "SPDK00000000000004", 00:12:37.571 "model_number": "SPDK bdev Controller", 00:12:37.571 "max_namespaces": 32, 00:12:37.571 "min_cntlid": 1, 00:12:37.571 "max_cntlid": 65519, 00:12:37.571 "namespaces": [ 00:12:37.571 { 00:12:37.571 "nsid": 1, 00:12:37.571 "bdev_name": "Null4", 00:12:37.571 "name": "Null4", 00:12:37.571 "nguid": "948A7BD3EE234ABE956674D5A47ED562", 00:12:37.571 "uuid": "948a7bd3-ee23-4abe-9566-74d5a47ed562" 00:12:37.571 } 00:12:37.571 ] 00:12:37.571 } 00:12:37.571 ] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.571 19:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.571 rmmod nvme_tcp 00:12:37.571 rmmod nvme_fabrics 00:12:37.571 rmmod nvme_keyring 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2923497 ']' 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2923497 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2923497 ']' 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2923497 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.571 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2923497 00:12:37.829 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.829 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.830 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2923497' 00:12:37.830 killing process with pid 2923497 00:12:37.830 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2923497 00:12:37.830 19:42:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2923497 00:12:38.763 19:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.763 19:42:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.298 00:12:41.298 real 0m7.285s 00:12:41.298 user 0m9.679s 00:12:41.298 sys 0m2.139s 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.298 ************************************ 00:12:41.298 END TEST nvmf_target_discovery 00:12:41.298 ************************************ 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.298 ************************************ 00:12:41.298 START TEST nvmf_referrals 00:12:41.298 ************************************ 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:41.298 * Looking for test storage... 
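(Editor's note, recapping the discovery test that just finished before the referrals test output continues: it created four null bdevs and subsystems, exposed each of them plus the discovery subsystem and one referral over TCP, confirmed six discovery log records with nvme discover and the same view through nvmf_get_subsystems, then deleted everything and removed the referral. The sequence is shown below as direct scripts/rpc.py calls for readability; the test actually issues them through its rpc_cmd wrapper against the namespaced target, so treat this as a condensed sketch rather than the script verbatim.)

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420           # 6 discovery log records expected
# Teardown mirrors the setup:
for i in 1 2 3 4; do
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    ./scripts/rpc.py bdev_null_delete Null$i
done
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'  # expected to come back empty
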
00:12:41.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:41.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.298 --rc genhtml_branch_coverage=1 00:12:41.298 --rc genhtml_function_coverage=1 00:12:41.298 --rc genhtml_legend=1 00:12:41.298 --rc geninfo_all_blocks=1 00:12:41.298 --rc geninfo_unexecuted_blocks=1 00:12:41.298 00:12:41.298 ' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:41.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.298 --rc genhtml_branch_coverage=1 00:12:41.298 --rc genhtml_function_coverage=1 00:12:41.298 --rc genhtml_legend=1 00:12:41.298 --rc geninfo_all_blocks=1 00:12:41.298 --rc geninfo_unexecuted_blocks=1 00:12:41.298 00:12:41.298 ' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:41.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.298 --rc genhtml_branch_coverage=1 00:12:41.298 --rc genhtml_function_coverage=1 00:12:41.298 --rc genhtml_legend=1 00:12:41.298 --rc geninfo_all_blocks=1 00:12:41.298 --rc geninfo_unexecuted_blocks=1 00:12:41.298 00:12:41.298 ' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:41.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.298 --rc genhtml_branch_coverage=1 00:12:41.298 --rc genhtml_function_coverage=1 00:12:41.298 --rc genhtml_legend=1 00:12:41.298 --rc geninfo_all_blocks=1 00:12:41.298 --rc geninfo_unexecuted_blocks=1 00:12:41.298 00:12:41.298 ' 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.298 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.299 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:43.200 19:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.200 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:43.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:43.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:43.201 
19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:43.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:43.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:43.201 19:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.201 19:42:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.201 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.201 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.201 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.459 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.459 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.459 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.459 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.459 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:12:43.459 00:12:43.459 --- 10.0.0.2 ping statistics --- 00:12:43.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.459 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:12:43.459 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:12:43.460 00:12:43.460 --- 10.0.0.1 ping statistics --- 00:12:43.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.460 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2925849 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2925849 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2925849 ']' 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
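For readability, the network bring-up that nvmf_tcp_init performs in the trace above reduces to the short sequence below. This is a condensed sketch of the commands already visible in the log, not a separate procedure: the target-side port (NVMF_TARGET_INTERFACE=cvl_0_0) is moved into its own network namespace, the initiator-side port (NVMF_INITIATOR_INTERFACE=cvl_0_1) stays in the default namespace, and the nvmf_tgt process is then launched inside that namespace. The interface names and the 10.0.0.1/10.0.0.2 addresses are the values from this particular run, not fixed constants.

    ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the NVMe-oF target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2                                                  # default namespace -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator reachability check
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # nvmfappstart: the target runs inside the namespace (full path in the trace)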
00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.460 19:42:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.460 [2024-10-13 19:42:33.190973] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:43.460 [2024-10-13 19:42:33.191115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.718 [2024-10-13 19:42:33.336678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.718 [2024-10-13 19:42:33.481786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.718 [2024-10-13 19:42:33.481870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.718 [2024-10-13 19:42:33.481909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.718 [2024-10-13 19:42:33.481947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.718 [2024-10-13 19:42:33.481978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.718 [2024-10-13 19:42:33.484852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.718 [2024-10-13 19:42:33.484910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.718 [2024-10-13 19:42:33.484969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.718 [2024-10-13 19:42:33.484973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.653 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 [2024-10-13 19:42:34.178530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:44.654 [2024-10-13 19:42:34.200432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.654 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:44.912 19:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.912 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.170 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.428 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:45.428 19:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.428 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.713 19:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.713 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.994 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.252 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
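The referral checks that target/referrals.sh walks through above boil down to the following sequence; this is a sketch of the flow rather than the full script, which additionally sorts each result and string-compares it against the expected list. rpc_cmd here stands for the test suite's wrapper around the target's JSON-RPC interface (scripts/rpc.py), and the host NQN/UUID passed to nvme discover are the NVME_HOSTNQN/NVME_HOSTID values generated when nvmf/common.sh is sourced.

    # 1. Add referrals to three other discovery services; the RPC view must list all of them.
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # -> 127.0.0.2 127.0.0.3 127.0.0.4

    # 2. The same referrals must also appear in the discovery log page seen by the initiator.
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # 3. Removing them empties both views again.
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430

    # 4. Referrals can also carry a subsystem NQN: "-n discovery" is reported as a
    #    discovery-subsystem referral, while "-n nqn.2016-06.io.spdk:cnode1" shows up
    #    as an "nvme subsystem" record in the discovery output, as checked above.
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1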
00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.510 rmmod nvme_tcp 00:12:46.510 rmmod nvme_fabrics 00:12:46.510 rmmod nvme_keyring 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2925849 ']' 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2925849 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2925849 ']' 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2925849 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2925849 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2925849' 00:12:46.510 killing process with pid 2925849 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2925849 00:12:46.510 19:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2925849 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.883 19:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.883 19:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.790 00:12:49.790 real 0m8.846s 00:12:49.790 user 0m16.232s 00:12:49.790 sys 0m2.498s 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.790 ************************************ 00:12:49.790 END TEST nvmf_referrals 00:12:49.790 ************************************ 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.790 ************************************ 00:12:49.790 START TEST nvmf_connect_disconnect 00:12:49.790 ************************************ 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:49.790 * Looking for test storage... 00:12:49.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.790 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.051 19:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:50.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.051 --rc genhtml_branch_coverage=1 00:12:50.051 --rc genhtml_function_coverage=1 00:12:50.051 --rc genhtml_legend=1 00:12:50.051 --rc geninfo_all_blocks=1 00:12:50.051 --rc geninfo_unexecuted_blocks=1 00:12:50.051 00:12:50.051 ' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:50.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.051 --rc genhtml_branch_coverage=1 00:12:50.051 --rc genhtml_function_coverage=1 00:12:50.051 --rc genhtml_legend=1 00:12:50.051 --rc geninfo_all_blocks=1 00:12:50.051 --rc geninfo_unexecuted_blocks=1 00:12:50.051 00:12:50.051 ' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:50.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.051 --rc genhtml_branch_coverage=1 00:12:50.051 --rc genhtml_function_coverage=1 00:12:50.051 --rc genhtml_legend=1 00:12:50.051 --rc geninfo_all_blocks=1 00:12:50.051 --rc geninfo_unexecuted_blocks=1 00:12:50.051 00:12:50.051 ' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:50.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.051 --rc genhtml_branch_coverage=1 00:12:50.051 --rc genhtml_function_coverage=1 00:12:50.051 --rc genhtml_legend=1 00:12:50.051 --rc geninfo_all_blocks=1 00:12:50.051 --rc geninfo_unexecuted_blocks=1 00:12:50.051 00:12:50.051 ' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.051 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.052 19:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.052 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.955 
19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.955 
19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:51.955 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.214 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.214 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.214 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:12:52.214 00:12:52.214 --- 10.0.0.2 ping statistics --- 00:12:52.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.214 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:52.214 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:12:52.215 00:12:52.215 --- 10.0.0.1 ping statistics --- 00:12:52.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.215 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=2928392 00:12:52.215 19:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2928392 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2928392 ']' 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.215 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.215 [2024-10-13 19:42:41.904623] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:52.215 [2024-10-13 19:42:41.904767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.473 [2024-10-13 19:42:42.037570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.473 [2024-10-13 19:42:42.173697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.473 [2024-10-13 19:42:42.173781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.473 [2024-10-13 19:42:42.173819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.473 [2024-10-13 19:42:42.173857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.473 [2024-10-13 19:42:42.173888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
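The waitforlisten 2928392 step above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A minimal sketch of that wait, assuming SPDK's scripts/rpc.py and the default socket path (the real helper in autotest_common.sh does more than this):

    # Poll the RPC socket until the target responds, bailing out if the process dies.
    pid=2928392
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done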
00:12:52.473 [2024-10-13 19:42:42.176835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.473 [2024-10-13 19:42:42.176908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.473 [2024-10-13 19:42:42.177001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.473 [2024-10-13 19:42:42.177004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.406 [2024-10-13 19:42:42.910529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.406 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.406 19:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.406 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.407 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.407 [2024-10-13 19:42:43.026553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.407 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.407 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:53.407 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:53.407 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:53.407 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:55.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.753 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:01.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.858 rmmod nvme_tcp 00:16:48.858 rmmod nvme_fabrics 00:16:48.858 rmmod nvme_keyring 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2928392 ']' 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2928392 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2928392 ']' 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2928392 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
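The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the test's 100 connect/disconnect iterations against the subsystem configured earlier. A condensed sketch of that sequence, with rpc.py standing in for the test's rpc_cmd wrapper and nvme-cli on the initiator side; the RPC arguments and the -i 8 queue count mirror the trace, while the loop body is simplified (the real test also waits for the namespace to appear before disconnecting):

    # Target side: transport, a 64 MB malloc bdev with 512 B blocks, subsystem, namespace, listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                     # returns Malloc0 in the trace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect with 8 I/O queues, then disconnect; each pass logs
    # "... disconnected 1 controller(s)".
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done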
00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2928392 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2928392' 00:16:48.858 killing process with pid 2928392 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2928392 00:16:48.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2928392 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.793 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:52.329 00:16:52.329 real 4m2.129s 00:16:52.329 user 15m16.310s 00:16:52.329 sys 0m39.234s 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 ************************************ 00:16:52.329 END TEST nvmf_connect_disconnect 00:16:52.329 ************************************ 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.329 19:46:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 ************************************ 00:16:52.329 START TEST nvmf_multitarget 00:16:52.329 ************************************ 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:52.329 * Looking for test storage... 00:16:52.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:52.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.329 --rc genhtml_branch_coverage=1 00:16:52.329 --rc genhtml_function_coverage=1 00:16:52.329 --rc genhtml_legend=1 00:16:52.329 --rc geninfo_all_blocks=1 00:16:52.329 --rc geninfo_unexecuted_blocks=1 00:16:52.329 00:16:52.329 ' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:52.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.329 --rc genhtml_branch_coverage=1 00:16:52.329 --rc genhtml_function_coverage=1 00:16:52.329 --rc genhtml_legend=1 00:16:52.329 --rc geninfo_all_blocks=1 00:16:52.329 --rc geninfo_unexecuted_blocks=1 00:16:52.329 00:16:52.329 ' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:52.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.329 --rc genhtml_branch_coverage=1 00:16:52.329 --rc genhtml_function_coverage=1 00:16:52.329 --rc genhtml_legend=1 00:16:52.329 --rc geninfo_all_blocks=1 00:16:52.329 --rc geninfo_unexecuted_blocks=1 00:16:52.329 00:16:52.329 ' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:52.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.329 --rc genhtml_branch_coverage=1 00:16:52.329 --rc genhtml_function_coverage=1 00:16:52.329 --rc genhtml_legend=1 00:16:52.329 --rc geninfo_all_blocks=1 00:16:52.329 --rc geninfo_unexecuted_blocks=1 00:16:52.329 00:16:52.329 ' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.329 19:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.329 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:52.330 19:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:52.330 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:54.231 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:54.232 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:54.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:16:54.232 00:16:54.232 --- 10.0.0.2 ping statistics --- 00:16:54.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.232 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:16:54.232 00:16:54.232 --- 10.0.0.1 ping statistics --- 00:16:54.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.232 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:54.232 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2960011 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2960011 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2960011 ']' 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.491 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:54.491 [2024-10-13 19:46:44.170814] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
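For readability, the nvmf_tcp_init sequence traced above amounts to roughly the following shell steps; this is a condensed sketch, with the interface names, addresses, and iptables rule taken from this run (the authoritative helper lives in test/nvmf/common.sh and may differ in detail):

# move one port of the NIC pair into a private namespace to act as the target side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the other port stays in the root namespace as the initiator side
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface, tagged so cleanup can find it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1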
00:16:54.491 [2024-10-13 19:46:44.170952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.748 [2024-10-13 19:46:44.313665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.748 [2024-10-13 19:46:44.457381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.748 [2024-10-13 19:46:44.457486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.748 [2024-10-13 19:46:44.457524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.748 [2024-10-13 19:46:44.457563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.748 [2024-10-13 19:46:44.457595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.748 [2024-10-13 19:46:44.460551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.748 [2024-10-13 19:46:44.460626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.748 [2024-10-13 19:46:44.460737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.748 [2024-10-13 19:46:44.460741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.313 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.313 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:55.313 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:55.313 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.313 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:55.570 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.570 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:55.570 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:55.570 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:55.570 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:55.570 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:55.828 "nvmf_tgt_1" 00:16:55.828 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:55.828 "nvmf_tgt_2" 00:16:55.828 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
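Stripped of the xtrace noise, the multitarget test running here is a short RPC round-trip; a condensed sketch, with the target names, sizes, and expected counts taken from this trace (multitarget_rpc.py is the test's own wrapper around the SPDK RPC socket):

rpc=test/nvmf/target/multitarget_rpc.py
[[ $($rpc nvmf_get_targets | jq length) == 1 ]]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32       # add two extra targets
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[[ $($rpc nvmf_get_targets | jq length) == 3 ]]
$rpc nvmf_delete_target -n nvmf_tgt_1             # tear them back down
$rpc nvmf_delete_target -n nvmf_tgt_2
[[ $($rpc nvmf_get_targets | jq length) == 1 ]]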
00:16:55.828 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:56.086 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:56.086 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:56.086 true 00:16:56.086 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:56.343 true 00:16:56.343 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:56.343 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.343 rmmod nvme_tcp 00:16:56.343 rmmod nvme_fabrics 00:16:56.343 rmmod nvme_keyring 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2960011 ']' 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2960011 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2960011 ']' 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2960011 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2960011 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.343 19:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2960011' 00:16:56.343 killing process with pid 2960011 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2960011 00:16:56.343 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2960011 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.717 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.648 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.648 00:16:59.648 real 0m7.584s 00:16:59.648 user 0m12.183s 00:16:59.648 sys 0m2.130s 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.649 ************************************ 00:16:59.649 END TEST nvmf_multitarget 00:16:59.649 ************************************ 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.649 ************************************ 00:16:59.649 START TEST nvmf_rpc 00:16:59.649 ************************************ 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:59.649 * Looking for test storage... 
00:16:59.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.649 --rc genhtml_branch_coverage=1 00:16:59.649 --rc genhtml_function_coverage=1 00:16:59.649 --rc genhtml_legend=1 00:16:59.649 --rc geninfo_all_blocks=1 00:16:59.649 --rc geninfo_unexecuted_blocks=1 00:16:59.649 00:16:59.649 ' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.649 --rc genhtml_branch_coverage=1 00:16:59.649 --rc genhtml_function_coverage=1 00:16:59.649 --rc genhtml_legend=1 00:16:59.649 --rc geninfo_all_blocks=1 00:16:59.649 --rc geninfo_unexecuted_blocks=1 00:16:59.649 00:16:59.649 ' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.649 --rc genhtml_branch_coverage=1 00:16:59.649 --rc genhtml_function_coverage=1 00:16:59.649 --rc genhtml_legend=1 00:16:59.649 --rc geninfo_all_blocks=1 00:16:59.649 --rc geninfo_unexecuted_blocks=1 00:16:59.649 00:16:59.649 ' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.649 --rc genhtml_branch_coverage=1 00:16:59.649 --rc genhtml_function_coverage=1 00:16:59.649 --rc genhtml_legend=1 00:16:59.649 --rc geninfo_all_blocks=1 00:16:59.649 --rc geninfo_unexecuted_blocks=1 00:16:59.649 00:16:59.649 ' 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
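The lcov gate traced just above relies on a small dotted-version comparator from scripts/common.sh; in rough outline it behaves like the sketch below (a simplification that assumes purely numeric components, not the exact implementation):

lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2 -> true
cmp_versions() {
    local IFS='.-:' op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    # compare component by component, treating missing components as 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]   # fully equal versions satisfy ==, <=, >=
}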
00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.649 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:59.909 19:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.909 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.811 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.811 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:01.811 19:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.811 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:01.812 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:17:02.070 00:17:02.070 --- 10.0.0.2 ping statistics --- 00:17:02.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.070 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:17:02.070 00:17:02.070 --- 10.0.0.1 ping statistics --- 00:17:02.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.070 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2962369 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2962369 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2962369 ']' 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.070 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 [2024-10-13 19:46:51.781152] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
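nvmfappstart, traced above, boots the target application inside the test namespace and blocks until its RPC socket answers; a rough sketch of what that amounts to (the real helpers are in test/nvmf/common.sh and test/common/autotest_common.sh; the retry limit of 100 comes from the trace, the poll interval here is illustrative):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default RPC socket until the app is ready, or give up after 100 tries
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done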
00:17:02.070 [2024-10-13 19:46:51.781299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.329 [2024-10-13 19:46:51.921644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.329 [2024-10-13 19:46:52.059231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.329 [2024-10-13 19:46:52.059321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.329 [2024-10-13 19:46:52.059360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.329 [2024-10-13 19:46:52.059410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.329 [2024-10-13 19:46:52.059454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.329 [2024-10-13 19:46:52.062467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.329 [2024-10-13 19:46:52.062498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.329 [2024-10-13 19:46:52.062584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.329 [2024-10-13 19:46:52.062585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:03.263 "tick_rate": 2700000000, 00:17:03.263 "poll_groups": [ 00:17:03.263 { 00:17:03.263 "name": "nvmf_tgt_poll_group_000", 00:17:03.263 "admin_qpairs": 0, 00:17:03.263 "io_qpairs": 0, 00:17:03.263 "current_admin_qpairs": 0, 00:17:03.263 "current_io_qpairs": 0, 00:17:03.263 "pending_bdev_io": 0, 00:17:03.263 "completed_nvme_io": 0, 00:17:03.263 "transports": [] 00:17:03.263 }, 00:17:03.263 { 00:17:03.263 "name": "nvmf_tgt_poll_group_001", 00:17:03.263 "admin_qpairs": 0, 00:17:03.263 "io_qpairs": 0, 00:17:03.263 "current_admin_qpairs": 0, 00:17:03.263 "current_io_qpairs": 0, 00:17:03.263 "pending_bdev_io": 0, 00:17:03.263 "completed_nvme_io": 0, 00:17:03.263 "transports": [] 00:17:03.263 }, 00:17:03.263 { 00:17:03.263 "name": "nvmf_tgt_poll_group_002", 00:17:03.263 "admin_qpairs": 0, 00:17:03.263 "io_qpairs": 0, 00:17:03.263 
"current_admin_qpairs": 0, 00:17:03.263 "current_io_qpairs": 0, 00:17:03.263 "pending_bdev_io": 0, 00:17:03.263 "completed_nvme_io": 0, 00:17:03.263 "transports": [] 00:17:03.263 }, 00:17:03.263 { 00:17:03.263 "name": "nvmf_tgt_poll_group_003", 00:17:03.263 "admin_qpairs": 0, 00:17:03.263 "io_qpairs": 0, 00:17:03.263 "current_admin_qpairs": 0, 00:17:03.263 "current_io_qpairs": 0, 00:17:03.263 "pending_bdev_io": 0, 00:17:03.263 "completed_nvme_io": 0, 00:17:03.263 "transports": [] 00:17:03.263 } 00:17:03.263 ] 00:17:03.263 }' 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.263 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 [2024-10-13 19:46:52.855983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:03.264 "tick_rate": 2700000000, 00:17:03.264 "poll_groups": [ 00:17:03.264 { 00:17:03.264 "name": "nvmf_tgt_poll_group_000", 00:17:03.264 "admin_qpairs": 0, 00:17:03.264 "io_qpairs": 0, 00:17:03.264 "current_admin_qpairs": 0, 00:17:03.264 "current_io_qpairs": 0, 00:17:03.264 "pending_bdev_io": 0, 00:17:03.264 "completed_nvme_io": 0, 00:17:03.264 "transports": [ 00:17:03.264 { 00:17:03.264 "trtype": "TCP" 00:17:03.264 } 00:17:03.264 ] 00:17:03.264 }, 00:17:03.264 { 00:17:03.264 "name": "nvmf_tgt_poll_group_001", 00:17:03.264 "admin_qpairs": 0, 00:17:03.264 "io_qpairs": 0, 00:17:03.264 "current_admin_qpairs": 0, 00:17:03.264 "current_io_qpairs": 0, 00:17:03.264 "pending_bdev_io": 0, 00:17:03.264 "completed_nvme_io": 0, 00:17:03.264 "transports": [ 00:17:03.264 { 00:17:03.264 "trtype": "TCP" 00:17:03.264 } 00:17:03.264 ] 00:17:03.264 }, 00:17:03.264 { 00:17:03.264 "name": "nvmf_tgt_poll_group_002", 00:17:03.264 "admin_qpairs": 0, 00:17:03.264 "io_qpairs": 0, 00:17:03.264 "current_admin_qpairs": 0, 00:17:03.264 "current_io_qpairs": 0, 00:17:03.264 "pending_bdev_io": 0, 00:17:03.264 "completed_nvme_io": 0, 00:17:03.264 "transports": [ 00:17:03.264 { 00:17:03.264 "trtype": "TCP" 
00:17:03.264 } 00:17:03.264 ] 00:17:03.264 }, 00:17:03.264 { 00:17:03.264 "name": "nvmf_tgt_poll_group_003", 00:17:03.264 "admin_qpairs": 0, 00:17:03.264 "io_qpairs": 0, 00:17:03.264 "current_admin_qpairs": 0, 00:17:03.264 "current_io_qpairs": 0, 00:17:03.264 "pending_bdev_io": 0, 00:17:03.264 "completed_nvme_io": 0, 00:17:03.264 "transports": [ 00:17:03.264 { 00:17:03.264 "trtype": "TCP" 00:17:03.264 } 00:17:03.264 ] 00:17:03.264 } 00:17:03.264 ] 00:17:03.264 }' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 Malloc1 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.264 [2024-10-13 19:46:53.074009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:03.264 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:03.522 [2024-10-13 19:46:53.097290] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:03.522 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:03.522 could not add new controller: failed to write to nvme-fabrics device 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:03.522 19:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.522 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:04.088 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:04.088 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:04.088 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.088 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:04.088 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:06.650 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.650 [2024-10-13 19:46:55.987146] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:06.650 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:06.650 could not add new controller: failed to write to nvme-fabrics device 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.650 
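The failed connects traced above (target/rpc.sh@58, and again at @69 after nvmf_subsystem_remove_host) show the subsystem rejecting a host NQN that is not on its allow list; the connect succeeds once the host is added with nvmf_subsystem_add_host, and for any initiator once allow_any_host is enabled. A condensed sketch of that access-control sequence, assuming SPDK's scripts/rpc.py and nvme-cli are available and the listener on 10.0.0.2:4420 already exists:

  # Rejected while the host NQN is not on the subsystem's allow list.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # Permit this specific host, after which the same connect succeeds.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # Or open the subsystem to every host instead.
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1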
19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.650 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.932 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.932 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.932 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.932 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.932 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:08.833 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.091 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.091 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:09.091 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:09.091 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.092 
19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 [2024-10-13 19:46:58.837886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.092 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.658 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:09.658 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.658 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.658 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:09.658 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.186 [2024-10-13 19:47:01.675739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.186 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.751 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.751 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.751 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.751 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:12.751 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:14.686 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.944 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.945 [2024-10-13 19:47:04.617832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.945 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.509 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.509 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.509 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.509 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:15.509 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:18.035 
19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
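Each pass of the loop at target/rpc.sh@81 (several appear above and below) follows the same create / attach / connect / tear-down cycle. A stand-alone sketch of one iteration, assuming scripts/rpc.py and nvme-cli are on the path and Malloc1 is a bdev created earlier in the test:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Poll until the namespace appears as a block device with the expected serial.
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1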
00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 [2024-10-13 19:47:07.462440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.035 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.296 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.296 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:18.296 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.296 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:18.296 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.831 [2024-10-13 19:47:10.315415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.831 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.398 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.398 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:21.398 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.398 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:21.398 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:23.295 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.553 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.553 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.553 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.553 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.553 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.553 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:23.554 
19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 [2024-10-13 19:47:13.200635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 [2024-10-13 19:47:13.248730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 
19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 [2024-10-13 19:47:13.296898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 [2024-10-13 19:47:13.345053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.554 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 [2024-10-13 19:47:13.393218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:23.813 "tick_rate": 2700000000, 00:17:23.813 "poll_groups": [ 00:17:23.813 { 00:17:23.813 "name": "nvmf_tgt_poll_group_000", 00:17:23.813 "admin_qpairs": 2, 00:17:23.813 "io_qpairs": 84, 00:17:23.813 "current_admin_qpairs": 0, 00:17:23.813 "current_io_qpairs": 0, 00:17:23.813 "pending_bdev_io": 0, 00:17:23.813 "completed_nvme_io": 139, 00:17:23.813 "transports": [ 00:17:23.813 { 00:17:23.813 "trtype": "TCP" 00:17:23.813 } 00:17:23.813 ] 00:17:23.813 }, 00:17:23.813 { 00:17:23.813 "name": "nvmf_tgt_poll_group_001", 00:17:23.813 "admin_qpairs": 2, 00:17:23.813 "io_qpairs": 84, 00:17:23.813 "current_admin_qpairs": 0, 00:17:23.813 "current_io_qpairs": 0, 00:17:23.813 "pending_bdev_io": 0, 00:17:23.813 "completed_nvme_io": 220, 00:17:23.813 "transports": [ 00:17:23.813 { 00:17:23.813 "trtype": "TCP" 00:17:23.813 } 00:17:23.813 ] 00:17:23.813 }, 00:17:23.813 { 00:17:23.813 "name": "nvmf_tgt_poll_group_002", 00:17:23.813 "admin_qpairs": 1, 00:17:23.813 "io_qpairs": 84, 00:17:23.813 "current_admin_qpairs": 0, 00:17:23.813 "current_io_qpairs": 0, 00:17:23.813 "pending_bdev_io": 0, 00:17:23.813 "completed_nvme_io": 136, 00:17:23.813 "transports": [ 00:17:23.813 { 00:17:23.813 "trtype": "TCP" 00:17:23.813 } 00:17:23.813 ] 00:17:23.813 }, 00:17:23.813 { 00:17:23.813 "name": "nvmf_tgt_poll_group_003", 00:17:23.813 "admin_qpairs": 2, 00:17:23.813 "io_qpairs": 84, 00:17:23.813 "current_admin_qpairs": 0, 00:17:23.813 "current_io_qpairs": 0, 00:17:23.813 "pending_bdev_io": 0, 00:17:23.813 "completed_nvme_io": 191, 00:17:23.813 "transports": [ 00:17:23.813 { 00:17:23.813 "trtype": "TCP" 00:17:23.813 } 00:17:23.813 ] 00:17:23.813 } 00:17:23.813 ] 00:17:23.813 }' 00:17:23.813 19:47:13 
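The jsum helper invoked below totals a counter over every poll group by piping jq output through awk; an equivalent one-liner against a running target, assuming scripts/rpc.py is available, would be:

  # Sum io_qpairs over all poll groups (4 groups x 84 qpairs = 336 in the stats above).
  scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'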
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.813 rmmod nvme_tcp 00:17:23.813 rmmod nvme_fabrics 00:17:23.813 rmmod nvme_keyring 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2962369 ']' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2962369 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2962369 ']' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2962369 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.813 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2962369 00:17:24.071 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.071 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.071 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2962369' 00:17:24.071 killing process with pid 2962369 00:17:24.071 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2962369 00:17:24.071 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2962369 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.446 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.350 00:17:27.350 real 0m27.642s 00:17:27.350 user 1m29.519s 00:17:27.350 sys 0m4.491s 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 ************************************ 00:17:27.350 END TEST nvmf_rpc 00:17:27.350 ************************************ 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.350 19:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 ************************************ 00:17:27.350 START TEST nvmf_invalid 00:17:27.350 ************************************ 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:27.350 * Looking for test storage... 
00:17:27.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:27.350 19:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.350 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.609 19:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.511 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:29.512 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:29.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:17:29.770 00:17:29.770 --- 10.0.0.2 ping statistics --- 00:17:29.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.770 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:17:29.770 00:17:29.770 --- 10.0.0.1 ping statistics --- 00:17:29.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.770 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2967130 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2967130 00:17:29.770 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2967130 ']' 00:17:29.771 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.771 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.771 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.771 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.771 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:29.771 [2024-10-13 19:47:19.460808] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:17:29.771 [2024-10-13 19:47:19.460945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.029 [2024-10-13 19:47:19.599583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.029 [2024-10-13 19:47:19.732990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.029 [2024-10-13 19:47:19.733079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.029 [2024-10-13 19:47:19.733105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.029 [2024-10-13 19:47:19.733129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.029 [2024-10-13 19:47:19.733148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.029 [2024-10-13 19:47:19.736006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.029 [2024-10-13 19:47:19.736077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.029 [2024-10-13 19:47:19.736174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.029 [2024-10-13 19:47:19.736181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2090 00:17:30.962 [2024-10-13 19:47:20.747618] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:30.962 { 00:17:30.962 "nqn": "nqn.2016-06.io.spdk:cnode2090", 00:17:30.962 "tgt_name": "foobar", 00:17:30.962 "method": "nvmf_create_subsystem", 00:17:30.962 "req_id": 1 00:17:30.962 } 00:17:30.962 Got JSON-RPC error response 00:17:30.962 response: 00:17:30.962 { 00:17:30.962 "code": -32603, 00:17:30.962 "message": "Unable to find target foobar" 00:17:30.962 }' 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:30.962 { 00:17:30.962 "nqn": "nqn.2016-06.io.spdk:cnode2090", 00:17:30.962 "tgt_name": "foobar", 00:17:30.962 "method": "nvmf_create_subsystem", 00:17:30.962 "req_id": 1 00:17:30.962 } 00:17:30.962 Got JSON-RPC error response 00:17:30.962 
response: 00:17:30.962 { 00:17:30.962 "code": -32603, 00:17:30.962 "message": "Unable to find target foobar" 00:17:30.962 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:30.962 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8012 00:17:31.220 [2024-10-13 19:47:21.016616] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8012: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:31.479 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:31.479 { 00:17:31.479 "nqn": "nqn.2016-06.io.spdk:cnode8012", 00:17:31.479 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:31.479 "method": "nvmf_create_subsystem", 00:17:31.479 "req_id": 1 00:17:31.479 } 00:17:31.479 Got JSON-RPC error response 00:17:31.479 response: 00:17:31.479 { 00:17:31.479 "code": -32602, 00:17:31.479 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:31.479 }' 00:17:31.479 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:31.479 { 00:17:31.479 "nqn": "nqn.2016-06.io.spdk:cnode8012", 00:17:31.479 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:31.479 "method": "nvmf_create_subsystem", 00:17:31.479 "req_id": 1 00:17:31.479 } 00:17:31.479 Got JSON-RPC error response 00:17:31.479 response: 00:17:31.479 { 00:17:31.479 "code": -32602, 00:17:31.479 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:31.479 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:31.479 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:31.479 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23008 00:17:31.737 [2024-10-13 19:47:21.349809] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23008: invalid model number 'SPDK_Controller' 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:31.737 { 00:17:31.737 "nqn": "nqn.2016-06.io.spdk:cnode23008", 00:17:31.737 "model_number": "SPDK_Controller\u001f", 00:17:31.737 "method": "nvmf_create_subsystem", 00:17:31.737 "req_id": 1 00:17:31.737 } 00:17:31.737 Got JSON-RPC error response 00:17:31.737 response: 00:17:31.737 { 00:17:31.737 "code": -32602, 00:17:31.737 "message": "Invalid MN SPDK_Controller\u001f" 00:17:31.737 }' 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:31.737 { 00:17:31.737 "nqn": "nqn.2016-06.io.spdk:cnode23008", 00:17:31.737 "model_number": "SPDK_Controller\u001f", 00:17:31.737 "method": "nvmf_create_subsystem", 00:17:31.737 "req_id": 1 00:17:31.737 } 00:17:31.737 Got JSON-RPC error response 00:17:31.737 response: 00:17:31.737 { 00:17:31.737 "code": -32602, 00:17:31.737 "message": "Invalid MN SPDK_Controller\u001f" 00:17:31.737 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:31.737 19:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:31.737 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:31.738 
19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 
00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'o9@u"Ugy5k_}4ZH_{C' 00:17:31.738 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'o9@u"Ugy5k_}4ZH_{C' nqn.2016-06.io.spdk:cnode29407 00:17:31.996 [2024-10-13 19:47:21.723025] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29407: invalid serial number 'o9@u"Ugy5k_}4ZH_{C' 00:17:31.996 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:31.996 { 00:17:31.996 "nqn": "nqn.2016-06.io.spdk:cnode29407", 00:17:31.996 "serial_number": "o9@u\"Ugy5k_}4ZH_{C", 00:17:31.996 "method": "nvmf_create_subsystem", 00:17:31.996 "req_id": 1 00:17:31.996 } 00:17:31.996 Got JSON-RPC error response 00:17:31.996 response: 00:17:31.996 { 00:17:31.996 "code": -32602, 00:17:31.996 "message": "Invalid SN o9@u\"Ugy5k_}4ZH_{C" 00:17:31.996 }' 00:17:31.996 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:31.996 { 00:17:31.996 "nqn": "nqn.2016-06.io.spdk:cnode29407", 00:17:31.996 "serial_number": "o9@u\"Ugy5k_}4ZH_{C", 00:17:31.996 "method": "nvmf_create_subsystem", 00:17:31.996 "req_id": 1 00:17:31.996 } 00:17:31.996 Got JSON-RPC error response 00:17:31.996 response: 00:17:31.996 { 00:17:31.996 "code": -32602, 00:17:31.996 "message": "Invalid SN o9@u\"Ugy5k_}4ZH_{C" 00:17:31.996 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:31.996 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:31.996 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:31.997 
19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 
00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.997 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:32.256 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=$'\177' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x7f' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '3|1mO^:^Bp*jKD4}c\Xi*W*M4A.z5gB='\''d)4p}' 00:17:32.257 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '3|1mO^:^Bp*jKD4}c\Xi*W*M4A.z5gB='\''d)4p}' nqn.2016-06.io.spdk:cnode7240 00:17:32.515 [2024-10-13 19:47:22.204643] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7240: invalid model number '3|1mO^:^Bp*jKD4}c\Xi*W*M4A.z5gB='d)4p}' 00:17:32.515 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:32.515 { 00:17:32.515 "nqn": "nqn.2016-06.io.spdk:cnode7240", 00:17:32.515 "model_number": "3|\u007f1mO^:^Bp*jKD4}c\\Xi*W*M4A.z5gB\u007f='\''d)4p\u007f}", 00:17:32.515 "method": "nvmf_create_subsystem", 00:17:32.515 "req_id": 1 00:17:32.515 } 00:17:32.515 Got JSON-RPC error response 00:17:32.515 response: 00:17:32.515 { 00:17:32.515 "code": -32602, 00:17:32.515 "message": "Invalid MN 3|\u007f1mO^:^Bp*jKD4}c\\Xi*W*M4A.z5gB\u007f='\''d)4p\u007f}" 00:17:32.515 }' 00:17:32.515 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:32.515 { 00:17:32.515 "nqn": "nqn.2016-06.io.spdk:cnode7240", 00:17:32.515 "model_number": "3|\u007f1mO^:^Bp*jKD4}c\\Xi*W*M4A.z5gB\u007f='d)4p\u007f}", 00:17:32.515 "method": "nvmf_create_subsystem", 00:17:32.515 "req_id": 1 00:17:32.515 } 00:17:32.515 Got JSON-RPC error response 00:17:32.515 response: 00:17:32.515 { 00:17:32.515 "code": -32602, 00:17:32.515 "message": "Invalid MN 3|\u007f1mO^:^Bp*jKD4}c\\Xi*W*M4A.z5gB\u007f='d)4p\u007f}" 00:17:32.515 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:32.515 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:32.773 [2024-10-13 19:47:22.493760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.773 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:33.031 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:33.031 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:33.031 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head 
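The loop traced above is target/invalid.sh assembling a random model number one byte at a time: printf %x turns the chosen code point into hex, echo -e emits the raw byte (non-printable ones such as 0x7f included), and each byte is appended to $string before the result is handed to nvmf_create_subsystem, which rejects it with "Invalid MN". A minimal bash sketch of the same byte-by-byte construction, assuming nothing beyond what the trace shows (the helper name below is mine, not the script's):

# Hedged sketch, not part of the captured run: build a string from numeric
# character codes the way the traced loop does.
build_random_string() {
    local length=$1 string="" code ll
    for ((ll = 0; ll < length; ll++)); do
        code=$((RANDOM % 95 + 33))                      # 33..127, so DEL (0x7f) can appear
        string+=$(echo -e "\\x$(printf %x "$code")")    # hex code -> raw byte, appended
    done
    echo "$string"
}
build_random_string 41    # e.g. an invalid -d value for rpc.py nvmf_create_subsystem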
-n 1 00:17:33.031 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:33.031 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:33.288 [2024-10-13 19:47:23.057058] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:33.288 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:33.288 { 00:17:33.288 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:33.288 "listen_address": { 00:17:33.288 "trtype": "tcp", 00:17:33.288 "traddr": "", 00:17:33.288 "trsvcid": "4421" 00:17:33.288 }, 00:17:33.288 "method": "nvmf_subsystem_remove_listener", 00:17:33.288 "req_id": 1 00:17:33.289 } 00:17:33.289 Got JSON-RPC error response 00:17:33.289 response: 00:17:33.289 { 00:17:33.289 "code": -32602, 00:17:33.289 "message": "Invalid parameters" 00:17:33.289 }' 00:17:33.289 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:33.289 { 00:17:33.289 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:33.289 "listen_address": { 00:17:33.289 "trtype": "tcp", 00:17:33.289 "traddr": "", 00:17:33.289 "trsvcid": "4421" 00:17:33.289 }, 00:17:33.289 "method": "nvmf_subsystem_remove_listener", 00:17:33.289 "req_id": 1 00:17:33.289 } 00:17:33.289 Got JSON-RPC error response 00:17:33.289 response: 00:17:33.289 { 00:17:33.289 "code": -32602, 00:17:33.289 "message": "Invalid parameters" 00:17:33.289 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:33.289 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29066 -i 0 00:17:33.546 [2024-10-13 19:47:23.325946] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29066: invalid cntlid range [0-65519] 00:17:33.546 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:33.546 { 00:17:33.546 "nqn": "nqn.2016-06.io.spdk:cnode29066", 00:17:33.546 "min_cntlid": 0, 00:17:33.546 "method": "nvmf_create_subsystem", 00:17:33.546 "req_id": 1 00:17:33.546 } 00:17:33.546 Got JSON-RPC error response 00:17:33.546 response: 00:17:33.546 { 00:17:33.546 "code": -32602, 00:17:33.546 "message": "Invalid cntlid range [0-65519]" 00:17:33.546 }' 00:17:33.546 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:33.546 { 00:17:33.546 "nqn": "nqn.2016-06.io.spdk:cnode29066", 00:17:33.546 "min_cntlid": 0, 00:17:33.546 "method": "nvmf_create_subsystem", 00:17:33.546 "req_id": 1 00:17:33.546 } 00:17:33.546 Got JSON-RPC error response 00:17:33.546 response: 00:17:33.546 { 00:17:33.546 "code": -32602, 00:17:33.546 "message": "Invalid cntlid range [0-65519]" 00:17:33.546 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:33.546 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3260 -i 65520 00:17:33.804 [2024-10-13 19:47:23.606830] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3260: invalid cntlid range [65520-65519] 00:17:34.062 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:34.062 { 00:17:34.062 
"nqn": "nqn.2016-06.io.spdk:cnode3260", 00:17:34.062 "min_cntlid": 65520, 00:17:34.062 "method": "nvmf_create_subsystem", 00:17:34.062 "req_id": 1 00:17:34.062 } 00:17:34.062 Got JSON-RPC error response 00:17:34.062 response: 00:17:34.062 { 00:17:34.062 "code": -32602, 00:17:34.062 "message": "Invalid cntlid range [65520-65519]" 00:17:34.062 }' 00:17:34.062 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:34.062 { 00:17:34.062 "nqn": "nqn.2016-06.io.spdk:cnode3260", 00:17:34.062 "min_cntlid": 65520, 00:17:34.062 "method": "nvmf_create_subsystem", 00:17:34.062 "req_id": 1 00:17:34.062 } 00:17:34.062 Got JSON-RPC error response 00:17:34.062 response: 00:17:34.062 { 00:17:34.062 "code": -32602, 00:17:34.062 "message": "Invalid cntlid range [65520-65519]" 00:17:34.062 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.062 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28441 -I 0 00:17:34.320 [2024-10-13 19:47:23.887858] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28441: invalid cntlid range [1-0] 00:17:34.320 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:34.320 { 00:17:34.320 "nqn": "nqn.2016-06.io.spdk:cnode28441", 00:17:34.320 "max_cntlid": 0, 00:17:34.320 "method": "nvmf_create_subsystem", 00:17:34.320 "req_id": 1 00:17:34.320 } 00:17:34.320 Got JSON-RPC error response 00:17:34.320 response: 00:17:34.320 { 00:17:34.320 "code": -32602, 00:17:34.320 "message": "Invalid cntlid range [1-0]" 00:17:34.320 }' 00:17:34.320 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:34.320 { 00:17:34.320 "nqn": "nqn.2016-06.io.spdk:cnode28441", 00:17:34.320 "max_cntlid": 0, 00:17:34.320 "method": "nvmf_create_subsystem", 00:17:34.320 "req_id": 1 00:17:34.320 } 00:17:34.320 Got JSON-RPC error response 00:17:34.320 response: 00:17:34.320 { 00:17:34.320 "code": -32602, 00:17:34.320 "message": "Invalid cntlid range [1-0]" 00:17:34.320 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.320 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24302 -I 65520 00:17:34.578 [2024-10-13 19:47:24.156737] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24302: invalid cntlid range [1-65520] 00:17:34.578 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:34.578 { 00:17:34.578 "nqn": "nqn.2016-06.io.spdk:cnode24302", 00:17:34.578 "max_cntlid": 65520, 00:17:34.578 "method": "nvmf_create_subsystem", 00:17:34.578 "req_id": 1 00:17:34.578 } 00:17:34.578 Got JSON-RPC error response 00:17:34.578 response: 00:17:34.578 { 00:17:34.578 "code": -32602, 00:17:34.578 "message": "Invalid cntlid range [1-65520]" 00:17:34.578 }' 00:17:34.578 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:34.578 { 00:17:34.578 "nqn": "nqn.2016-06.io.spdk:cnode24302", 00:17:34.578 "max_cntlid": 65520, 00:17:34.578 "method": "nvmf_create_subsystem", 00:17:34.578 "req_id": 1 00:17:34.578 } 00:17:34.578 Got JSON-RPC error response 00:17:34.578 response: 00:17:34.578 { 00:17:34.578 "code": -32602, 00:17:34.578 "message": "Invalid cntlid range [1-65520]" 
00:17:34.578 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.578 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6893 -i 6 -I 5 00:17:34.836 [2024-10-13 19:47:24.421676] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6893: invalid cntlid range [6-5] 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:34.836 { 00:17:34.836 "nqn": "nqn.2016-06.io.spdk:cnode6893", 00:17:34.836 "min_cntlid": 6, 00:17:34.836 "max_cntlid": 5, 00:17:34.836 "method": "nvmf_create_subsystem", 00:17:34.836 "req_id": 1 00:17:34.836 } 00:17:34.836 Got JSON-RPC error response 00:17:34.836 response: 00:17:34.836 { 00:17:34.836 "code": -32602, 00:17:34.836 "message": "Invalid cntlid range [6-5]" 00:17:34.836 }' 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:34.836 { 00:17:34.836 "nqn": "nqn.2016-06.io.spdk:cnode6893", 00:17:34.836 "min_cntlid": 6, 00:17:34.836 "max_cntlid": 5, 00:17:34.836 "method": "nvmf_create_subsystem", 00:17:34.836 "req_id": 1 00:17:34.836 } 00:17:34.836 Got JSON-RPC error response 00:17:34.836 response: 00:17:34.836 { 00:17:34.836 "code": -32602, 00:17:34.836 "message": "Invalid cntlid range [6-5]" 00:17:34.836 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:34.836 { 00:17:34.836 "name": "foobar", 00:17:34.836 "method": "nvmf_delete_target", 00:17:34.836 "req_id": 1 00:17:34.836 } 00:17:34.836 Got JSON-RPC error response 00:17:34.836 response: 00:17:34.836 { 00:17:34.836 "code": -32602, 00:17:34.836 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:34.836 }' 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:34.836 { 00:17:34.836 "name": "foobar", 00:17:34.836 "method": "nvmf_delete_target", 00:17:34.836 "req_id": 1 00:17:34.836 } 00:17:34.836 Got JSON-RPC error response 00:17:34.836 response: 00:17:34.836 { 00:17:34.836 "code": -32602, 00:17:34.836 "message": "The specified target doesn't exist, cannot delete it." 
00:17:34.836 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.836 rmmod nvme_tcp 00:17:34.836 rmmod nvme_fabrics 00:17:34.836 rmmod nvme_keyring 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2967130 ']' 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2967130 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2967130 ']' 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2967130 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.836 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2967130 00:17:35.094 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:35.094 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:35.094 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2967130' 00:17:35.094 killing process with pid 2967130 00:17:35.094 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2967130 00:17:35.094 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2967130 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.030 19:47:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:38.565 00:17:38.565 real 0m10.783s 00:17:38.565 user 0m27.732s 00:17:38.565 sys 0m2.672s 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.565 ************************************ 00:17:38.565 END TEST nvmf_invalid 00:17:38.565 ************************************ 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.565 ************************************ 00:17:38.565 START TEST nvmf_connect_stress 00:17:38.565 ************************************ 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:38.565 * Looking for test storage... 
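Before the connect_stress storage probe continues below, it is worth spelling out what the nvmf_invalid run above actually asserted: every malformed request (the random model number, removing a listener at an empty address, cntlid ranges [0-65519], [65520-65519], [1-0], [1-65520] and [6-5], deleting the nonexistent target "foobar") must come back as JSON-RPC error -32602 with a message the script pattern-matches. A hedged sketch of reproducing one of those negative checks by hand, using only the rpc.py path and flags visible in the trace and assuming the target is still up on its default RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# min_cntlid 6 with max_cntlid 5 mirrors invalid.sh@83 above and must be rejected
out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6893 -i 6 -I 5 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] && echo "rejected as expected"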
00:17:38.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:38.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.565 --rc genhtml_branch_coverage=1 00:17:38.565 --rc genhtml_function_coverage=1 00:17:38.565 --rc genhtml_legend=1 00:17:38.565 --rc geninfo_all_blocks=1 00:17:38.565 --rc geninfo_unexecuted_blocks=1 00:17:38.565 00:17:38.565 ' 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:38.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.565 --rc genhtml_branch_coverage=1 00:17:38.565 --rc genhtml_function_coverage=1 00:17:38.565 --rc genhtml_legend=1 00:17:38.565 --rc geninfo_all_blocks=1 00:17:38.565 --rc geninfo_unexecuted_blocks=1 00:17:38.565 00:17:38.565 ' 00:17:38.565 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:38.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.565 --rc genhtml_branch_coverage=1 00:17:38.566 --rc genhtml_function_coverage=1 00:17:38.566 --rc genhtml_legend=1 00:17:38.566 --rc geninfo_all_blocks=1 00:17:38.566 --rc geninfo_unexecuted_blocks=1 00:17:38.566 00:17:38.566 ' 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:38.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.566 --rc genhtml_branch_coverage=1 00:17:38.566 --rc genhtml_function_coverage=1 00:17:38.566 --rc genhtml_legend=1 00:17:38.566 --rc geninfo_all_blocks=1 00:17:38.566 --rc geninfo_unexecuted_blocks=1 00:17:38.566 00:17:38.566 ' 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.566 19:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:38.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:38.566 19:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.468 19:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.468 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
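The discovery traced here is nvmftestinit walking a whitelist of Intel e810/x722 and Mellanox PCI device IDs and resolving each matching function to its kernel net device through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 end up as cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs lookup, assuming only the standard /sys/bus/pci layout:

# Hedged sketch: list the net interfaces behind one PCI network function,
# as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above does.
pci=0000:0a:00.0
for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $path ]] && echo "${path##*/}"    # prints e.g. cvl_0_0
done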
-- # net_devs+=("${pci_net_devs[@]}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:40.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:17:40.469 00:17:40.469 --- 10.0.0.2 ping statistics --- 00:17:40.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.469 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:17:40.469 00:17:40.469 --- 10.0.0.1 ping statistics --- 00:17:40.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.469 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:40.469 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2969997 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2969997 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2969997 ']' 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:40.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.470 19:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.728 [2024-10-13 19:47:30.300783] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:40.728 [2024-10-13 19:47:30.300946] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.728 [2024-10-13 19:47:30.459798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.986 [2024-10-13 19:47:30.603636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.986 [2024-10-13 19:47:30.603723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.986 [2024-10-13 19:47:30.603749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.986 [2024-10-13 19:47:30.603773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.986 [2024-10-13 19:47:30.603792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.986 [2024-10-13 19:47:30.606437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.986 [2024-10-13 19:47:30.606495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.986 [2024-10-13 19:47:30.606499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.552 [2024-10-13 19:47:31.264423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
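At this point in the trace the connect_stress run has finished the nvmf_tcp_init/nvmfappstart sequence (namespace creation, addressing, iptables rule, ping checks, nvmf_tgt launch) and is issuing the RPCs that configure the target. The lines below are a minimal standalone sketch of those same steps, not the harness itself: it assumes rpc_cmd resolves to SPDK's scripts/rpc.py (as it does in the SPDK test harness), uses SPDK_ROOT as a placeholder path, and takes the interface names, addresses, core mask and NQN from the log entries above and immediately following (the listener and the NULL1 null bdev are added in the next trace entries).

#!/usr/bin/env bash
# Condensed sketch of the target bring-up traced above (nvmf/common.sh + connect_stress.sh).
# SPDK_ROOT is a placeholder; cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2 and the NQN mirror the log.
set -euo pipefail

SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}   # assumption: point this at your SPDK checkout
NS=cvl_0_0_ns_spdk

# Move the target-side interface into its own namespace and address both ends.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the target port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace (-m 0xE = cores 1-3, as the reactor log shows).
ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
sleep 2   # the harness waits properly with waitforlisten on /var/tmp/spdk.sock

rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }   # stand-in for the harness's rpc_cmd

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# The following two RPCs correspond to the trace entries just after this point:
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512

With the target configured this way, the connect_stress binary seen later in the trace repeatedly connects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 while the test loop polls it with kill -0 to confirm it is still alive.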
00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.552 [2024-10-13 19:47:31.284661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.552 NULL1 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2970110 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.552 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:41.553 19:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.553 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.183 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.183 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:42.183 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.183 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.183 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.457 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.457 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:42.457 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.457 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.457 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.715 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.715 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:42.715 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.715 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.715 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.973 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.973 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:42.973 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.973 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.973 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.230 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.230 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:43.230 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.230 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.230 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.488 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.745 19:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:43.745 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.745 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.745 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.003 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.003 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:44.003 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.003 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.003 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.260 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.260 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:44.260 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.260 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.260 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.518 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.518 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:44.518 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.518 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.518 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.083 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.083 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:45.083 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.083 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.083 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.340 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.340 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:45.340 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.340 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.340 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.598 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.598 19:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:45.598 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.598 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.598 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.855 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.855 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:45.855 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.855 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.855 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.113 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.113 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:46.113 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.113 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.113 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.678 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.678 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:46.678 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.678 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.678 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.936 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.936 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:46.936 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.936 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.936 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.193 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.193 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:47.193 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.193 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.193 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.451 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.451 19:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:47.451 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.451 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.451 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.709 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.709 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:47.709 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.709 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.709 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.274 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:48.274 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.274 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.274 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.531 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.531 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:48.531 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.531 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.531 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.788 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.788 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:48.788 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.788 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.788 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.046 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.046 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:49.046 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.046 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.046 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.612 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.612 19:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:49.612 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.612 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.612 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.869 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.869 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:49.869 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.869 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.869 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.127 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.127 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:50.127 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.127 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.127 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.385 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.385 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:50.385 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.385 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.385 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.643 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.643 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:50.643 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.643 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.643 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.208 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.208 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:51.208 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.208 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.208 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.466 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.466 19:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:51.466 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.466 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.466 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.724 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.724 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:51.724 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.724 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.724 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.724 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2970110 00:17:51.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2970110) - No such process 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2970110 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.981 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.981 rmmod nvme_tcp 00:17:51.981 rmmod nvme_fabrics 00:17:51.981 rmmod nvme_keyring 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2969997 ']' 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2969997 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2969997 ']' 00:17:52.239 19:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2969997 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2969997 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2969997' 00:17:52.239 killing process with pid 2969997 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2969997 00:17:52.239 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2969997 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.173 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.707 19:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:55.707 00:17:55.707 real 0m17.131s 00:17:55.707 user 0m42.799s 00:17:55.707 sys 0m5.972s 00:17:55.707 19:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.707 19:47:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.707 ************************************ 00:17:55.707 END TEST nvmf_connect_stress 00:17:55.707 ************************************ 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.707 
19:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.707 ************************************ 00:17:55.707 START TEST nvmf_fused_ordering 00:17:55.707 ************************************ 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:55.707 * Looking for test storage... 00:17:55.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.707 --rc genhtml_branch_coverage=1 00:17:55.707 --rc genhtml_function_coverage=1 00:17:55.707 --rc genhtml_legend=1 00:17:55.707 --rc geninfo_all_blocks=1 00:17:55.707 --rc geninfo_unexecuted_blocks=1 00:17:55.707 00:17:55.707 ' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.707 --rc genhtml_branch_coverage=1 00:17:55.707 --rc genhtml_function_coverage=1 00:17:55.707 --rc genhtml_legend=1 00:17:55.707 --rc geninfo_all_blocks=1 00:17:55.707 --rc geninfo_unexecuted_blocks=1 00:17:55.707 00:17:55.707 ' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.707 --rc genhtml_branch_coverage=1 00:17:55.707 --rc genhtml_function_coverage=1 00:17:55.707 --rc genhtml_legend=1 00:17:55.707 --rc geninfo_all_blocks=1 00:17:55.707 --rc geninfo_unexecuted_blocks=1 00:17:55.707 00:17:55.707 ' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.707 --rc genhtml_branch_coverage=1 00:17:55.707 --rc genhtml_function_coverage=1 00:17:55.707 --rc genhtml_legend=1 00:17:55.707 --rc geninfo_all_blocks=1 00:17:55.707 --rc geninfo_unexecuted_blocks=1 00:17:55.707 00:17:55.707 ' 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.707 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:55.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:55.708 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.611 19:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.611 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.612 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.612 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:57.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:17:57.612 00:17:57.612 --- 10.0.0.2 ping statistics --- 00:17:57.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.612 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:17:57.612 00:17:57.612 --- 10.0.0.1 ping statistics --- 00:17:57.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.612 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:57.612 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2973460 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2973460 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2973460 ']' 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:57.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.871 19:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:57.871 [2024-10-13 19:47:47.537117] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:57.871 [2024-10-13 19:47:47.537263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.871 [2024-10-13 19:47:47.679267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.129 [2024-10-13 19:47:47.800996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.129 [2024-10-13 19:47:47.801073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.129 [2024-10-13 19:47:47.801093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.129 [2024-10-13 19:47:47.801113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.129 [2024-10-13 19:47:47.801128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.129 [2024-10-13 19:47:47.802548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 [2024-10-13 19:47:48.563825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 [2024-10-13 19:47:48.580106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 NULL1 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.063 19:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:59.063 [2024-10-13 19:47:48.655966] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
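The trace above is the full environment the fused_ordering run depends on: the target-side NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, 10.0.0.1/10.0.0.2 are assigned, an iptables rule opens TCP port 4420, nvmf_tgt is started inside the namespace, and the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, listener, and a null-bdev namespace are created over RPC before the fused_ordering binary connects. The sketch below is a minimal manual equivalent of that sequence, not part of the test output; it assumes the interface names from this run (cvl_0_0 / cvl_0_1), that commands are issued from the SPDK checkout root, and that the rpc_cmd helper seen in the log is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock socket.

    # Target NIC goes into its own namespace; the initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port (the test tags this rule with an SPDK_NVMF comment) and verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace (core mask 0x2, all tracepoint groups), then configure it over RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB, 512-byte blocks; reported below as a 1GB namespace
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Connect from the root namespace with the same transport ID string the test passes
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

With that in place, the application output that follows (the fused_ordering(0) through fused_ordering(1023) counters) is the test exercising each queue entry against the namespace created above.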
00:17:59.063 [2024-10-13 19:47:48.656075] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973612 ] 00:17:59.636 Attached to nqn.2016-06.io.spdk:cnode1 00:17:59.636 Namespace ID: 1 size: 1GB 00:17:59.636 fused_ordering(0) 00:17:59.636 fused_ordering(1) 00:17:59.636 fused_ordering(2) 00:17:59.636 fused_ordering(3) 00:17:59.636 fused_ordering(4) 00:17:59.636 fused_ordering(5) 00:17:59.636 fused_ordering(6) 00:17:59.636 fused_ordering(7) 00:17:59.636 fused_ordering(8) 00:17:59.636 fused_ordering(9) 00:17:59.636 fused_ordering(10) 00:17:59.636 fused_ordering(11) 00:17:59.636 fused_ordering(12) 00:17:59.636 fused_ordering(13) 00:17:59.636 fused_ordering(14) 00:17:59.636 fused_ordering(15) 00:17:59.636 fused_ordering(16) 00:17:59.636 fused_ordering(17) 00:17:59.636 fused_ordering(18) 00:17:59.636 fused_ordering(19) 00:17:59.636 fused_ordering(20) 00:17:59.636 fused_ordering(21) 00:17:59.636 fused_ordering(22) 00:17:59.636 fused_ordering(23) 00:17:59.636 fused_ordering(24) 00:17:59.636 fused_ordering(25) 00:17:59.636 fused_ordering(26) 00:17:59.636 fused_ordering(27) 00:17:59.636 fused_ordering(28) 00:17:59.636 fused_ordering(29) 00:17:59.636 fused_ordering(30) 00:17:59.636 fused_ordering(31) 00:17:59.636 fused_ordering(32) 00:17:59.636 fused_ordering(33) 00:17:59.636 fused_ordering(34) 00:17:59.636 fused_ordering(35) 00:17:59.636 fused_ordering(36) 00:17:59.636 fused_ordering(37) 00:17:59.636 fused_ordering(38) 00:17:59.636 fused_ordering(39) 00:17:59.636 fused_ordering(40) 00:17:59.636 fused_ordering(41) 00:17:59.636 fused_ordering(42) 00:17:59.636 fused_ordering(43) 00:17:59.636 fused_ordering(44) 00:17:59.636 fused_ordering(45) 00:17:59.636 fused_ordering(46) 00:17:59.636 fused_ordering(47) 00:17:59.636 fused_ordering(48) 00:17:59.636 fused_ordering(49) 00:17:59.636 fused_ordering(50) 00:17:59.636 fused_ordering(51) 00:17:59.636 fused_ordering(52) 00:17:59.636 fused_ordering(53) 00:17:59.636 fused_ordering(54) 00:17:59.636 fused_ordering(55) 00:17:59.636 fused_ordering(56) 00:17:59.636 fused_ordering(57) 00:17:59.636 fused_ordering(58) 00:17:59.636 fused_ordering(59) 00:17:59.636 fused_ordering(60) 00:17:59.636 fused_ordering(61) 00:17:59.636 fused_ordering(62) 00:17:59.636 fused_ordering(63) 00:17:59.636 fused_ordering(64) 00:17:59.636 fused_ordering(65) 00:17:59.636 fused_ordering(66) 00:17:59.636 fused_ordering(67) 00:17:59.636 fused_ordering(68) 00:17:59.636 fused_ordering(69) 00:17:59.636 fused_ordering(70) 00:17:59.636 fused_ordering(71) 00:17:59.636 fused_ordering(72) 00:17:59.636 fused_ordering(73) 00:17:59.636 fused_ordering(74) 00:17:59.636 fused_ordering(75) 00:17:59.636 fused_ordering(76) 00:17:59.636 fused_ordering(77) 00:17:59.636 fused_ordering(78) 00:17:59.636 fused_ordering(79) 00:17:59.636 fused_ordering(80) 00:17:59.636 fused_ordering(81) 00:17:59.636 fused_ordering(82) 00:17:59.636 fused_ordering(83) 00:17:59.636 fused_ordering(84) 00:17:59.636 fused_ordering(85) 00:17:59.636 fused_ordering(86) 00:17:59.636 fused_ordering(87) 00:17:59.636 fused_ordering(88) 00:17:59.636 fused_ordering(89) 00:17:59.636 fused_ordering(90) 00:17:59.636 fused_ordering(91) 00:17:59.636 fused_ordering(92) 00:17:59.637 fused_ordering(93) 00:17:59.637 fused_ordering(94) 00:17:59.637 fused_ordering(95) 00:17:59.637 fused_ordering(96) 00:17:59.637 fused_ordering(97) 00:17:59.637 fused_ordering(98) 
00:17:59.637 fused_ordering(99) 00:17:59.637 fused_ordering(100) 00:17:59.637 fused_ordering(101) 00:17:59.637 fused_ordering(102) 00:17:59.637 fused_ordering(103) 00:17:59.637 fused_ordering(104) 00:17:59.637 fused_ordering(105) 00:17:59.637 fused_ordering(106) 00:17:59.637 fused_ordering(107) 00:17:59.637 fused_ordering(108) 00:17:59.637 fused_ordering(109) 00:17:59.637 fused_ordering(110) 00:17:59.637 fused_ordering(111) 00:17:59.637 fused_ordering(112) 00:17:59.637 fused_ordering(113) 00:17:59.637 fused_ordering(114) 00:17:59.637 fused_ordering(115) 00:17:59.637 fused_ordering(116) 00:17:59.637 fused_ordering(117) 00:17:59.637 fused_ordering(118) 00:17:59.637 fused_ordering(119) 00:17:59.637 fused_ordering(120) 00:17:59.637 fused_ordering(121) 00:17:59.637 fused_ordering(122) 00:17:59.637 fused_ordering(123) 00:17:59.637 fused_ordering(124) 00:17:59.637 fused_ordering(125) 00:17:59.637 fused_ordering(126) 00:17:59.637 fused_ordering(127) 00:17:59.637 fused_ordering(128) 00:17:59.637 fused_ordering(129) 00:17:59.637 fused_ordering(130) 00:17:59.637 fused_ordering(131) 00:17:59.637 fused_ordering(132) 00:17:59.637 fused_ordering(133) 00:17:59.637 fused_ordering(134) 00:17:59.637 fused_ordering(135) 00:17:59.637 fused_ordering(136) 00:17:59.637 fused_ordering(137) 00:17:59.637 fused_ordering(138) 00:17:59.637 fused_ordering(139) 00:17:59.637 fused_ordering(140) 00:17:59.637 fused_ordering(141) 00:17:59.637 fused_ordering(142) 00:17:59.637 fused_ordering(143) 00:17:59.637 fused_ordering(144) 00:17:59.637 fused_ordering(145) 00:17:59.637 fused_ordering(146) 00:17:59.637 fused_ordering(147) 00:17:59.637 fused_ordering(148) 00:17:59.637 fused_ordering(149) 00:17:59.637 fused_ordering(150) 00:17:59.637 fused_ordering(151) 00:17:59.637 fused_ordering(152) 00:17:59.637 fused_ordering(153) 00:17:59.637 fused_ordering(154) 00:17:59.637 fused_ordering(155) 00:17:59.637 fused_ordering(156) 00:17:59.637 fused_ordering(157) 00:17:59.637 fused_ordering(158) 00:17:59.637 fused_ordering(159) 00:17:59.637 fused_ordering(160) 00:17:59.637 fused_ordering(161) 00:17:59.637 fused_ordering(162) 00:17:59.637 fused_ordering(163) 00:17:59.637 fused_ordering(164) 00:17:59.637 fused_ordering(165) 00:17:59.637 fused_ordering(166) 00:17:59.637 fused_ordering(167) 00:17:59.637 fused_ordering(168) 00:17:59.637 fused_ordering(169) 00:17:59.637 fused_ordering(170) 00:17:59.637 fused_ordering(171) 00:17:59.637 fused_ordering(172) 00:17:59.637 fused_ordering(173) 00:17:59.637 fused_ordering(174) 00:17:59.637 fused_ordering(175) 00:17:59.637 fused_ordering(176) 00:17:59.637 fused_ordering(177) 00:17:59.637 fused_ordering(178) 00:17:59.637 fused_ordering(179) 00:17:59.637 fused_ordering(180) 00:17:59.637 fused_ordering(181) 00:17:59.637 fused_ordering(182) 00:17:59.637 fused_ordering(183) 00:17:59.637 fused_ordering(184) 00:17:59.637 fused_ordering(185) 00:17:59.637 fused_ordering(186) 00:17:59.637 fused_ordering(187) 00:17:59.637 fused_ordering(188) 00:17:59.637 fused_ordering(189) 00:17:59.637 fused_ordering(190) 00:17:59.637 fused_ordering(191) 00:17:59.637 fused_ordering(192) 00:17:59.637 fused_ordering(193) 00:17:59.637 fused_ordering(194) 00:17:59.637 fused_ordering(195) 00:17:59.637 fused_ordering(196) 00:17:59.637 fused_ordering(197) 00:17:59.637 fused_ordering(198) 00:17:59.637 fused_ordering(199) 00:17:59.637 fused_ordering(200) 00:17:59.637 fused_ordering(201) 00:17:59.637 fused_ordering(202) 00:17:59.637 fused_ordering(203) 00:17:59.637 fused_ordering(204) 00:17:59.637 fused_ordering(205) 00:18:00.203 
fused_ordering(206) 00:18:00.203 fused_ordering(207) 00:18:00.203 fused_ordering(208) 00:18:00.203 fused_ordering(209) 00:18:00.203 fused_ordering(210) 00:18:00.203 fused_ordering(211) 00:18:00.203 fused_ordering(212) 00:18:00.203 fused_ordering(213) 00:18:00.203 fused_ordering(214) 00:18:00.203 fused_ordering(215) 00:18:00.203 fused_ordering(216) 00:18:00.203 fused_ordering(217) 00:18:00.203 fused_ordering(218) 00:18:00.203 fused_ordering(219) 00:18:00.203 fused_ordering(220) 00:18:00.203 fused_ordering(221) 00:18:00.203 fused_ordering(222) 00:18:00.203 fused_ordering(223) 00:18:00.203 fused_ordering(224) 00:18:00.203 fused_ordering(225) 00:18:00.203 fused_ordering(226) 00:18:00.203 fused_ordering(227) 00:18:00.203 fused_ordering(228) 00:18:00.203 fused_ordering(229) 00:18:00.203 fused_ordering(230) 00:18:00.203 fused_ordering(231) 00:18:00.203 fused_ordering(232) 00:18:00.203 fused_ordering(233) 00:18:00.203 fused_ordering(234) 00:18:00.203 fused_ordering(235) 00:18:00.203 fused_ordering(236) 00:18:00.203 fused_ordering(237) 00:18:00.203 fused_ordering(238) 00:18:00.203 fused_ordering(239) 00:18:00.203 fused_ordering(240) 00:18:00.203 fused_ordering(241) 00:18:00.203 fused_ordering(242) 00:18:00.203 fused_ordering(243) 00:18:00.203 fused_ordering(244) 00:18:00.203 fused_ordering(245) 00:18:00.203 fused_ordering(246) 00:18:00.203 fused_ordering(247) 00:18:00.203 fused_ordering(248) 00:18:00.203 fused_ordering(249) 00:18:00.203 fused_ordering(250) 00:18:00.203 fused_ordering(251) 00:18:00.203 fused_ordering(252) 00:18:00.203 fused_ordering(253) 00:18:00.203 fused_ordering(254) 00:18:00.203 fused_ordering(255) 00:18:00.203 fused_ordering(256) 00:18:00.203 fused_ordering(257) 00:18:00.203 fused_ordering(258) 00:18:00.203 fused_ordering(259) 00:18:00.203 fused_ordering(260) 00:18:00.203 fused_ordering(261) 00:18:00.203 fused_ordering(262) 00:18:00.203 fused_ordering(263) 00:18:00.203 fused_ordering(264) 00:18:00.203 fused_ordering(265) 00:18:00.203 fused_ordering(266) 00:18:00.203 fused_ordering(267) 00:18:00.203 fused_ordering(268) 00:18:00.203 fused_ordering(269) 00:18:00.203 fused_ordering(270) 00:18:00.203 fused_ordering(271) 00:18:00.203 fused_ordering(272) 00:18:00.203 fused_ordering(273) 00:18:00.203 fused_ordering(274) 00:18:00.203 fused_ordering(275) 00:18:00.203 fused_ordering(276) 00:18:00.203 fused_ordering(277) 00:18:00.203 fused_ordering(278) 00:18:00.203 fused_ordering(279) 00:18:00.203 fused_ordering(280) 00:18:00.203 fused_ordering(281) 00:18:00.203 fused_ordering(282) 00:18:00.203 fused_ordering(283) 00:18:00.203 fused_ordering(284) 00:18:00.203 fused_ordering(285) 00:18:00.203 fused_ordering(286) 00:18:00.203 fused_ordering(287) 00:18:00.203 fused_ordering(288) 00:18:00.203 fused_ordering(289) 00:18:00.203 fused_ordering(290) 00:18:00.203 fused_ordering(291) 00:18:00.203 fused_ordering(292) 00:18:00.203 fused_ordering(293) 00:18:00.203 fused_ordering(294) 00:18:00.203 fused_ordering(295) 00:18:00.203 fused_ordering(296) 00:18:00.203 fused_ordering(297) 00:18:00.203 fused_ordering(298) 00:18:00.203 fused_ordering(299) 00:18:00.203 fused_ordering(300) 00:18:00.203 fused_ordering(301) 00:18:00.203 fused_ordering(302) 00:18:00.203 fused_ordering(303) 00:18:00.203 fused_ordering(304) 00:18:00.203 fused_ordering(305) 00:18:00.203 fused_ordering(306) 00:18:00.203 fused_ordering(307) 00:18:00.203 fused_ordering(308) 00:18:00.203 fused_ordering(309) 00:18:00.203 fused_ordering(310) 00:18:00.203 fused_ordering(311) 00:18:00.203 fused_ordering(312) 00:18:00.203 fused_ordering(313) 
00:18:00.203 fused_ordering(314) 00:18:00.203 fused_ordering(315) 00:18:00.203 fused_ordering(316) 00:18:00.203 fused_ordering(317) 00:18:00.203 fused_ordering(318) 00:18:00.203 fused_ordering(319) 00:18:00.203 fused_ordering(320) 00:18:00.203 fused_ordering(321) 00:18:00.203 fused_ordering(322) 00:18:00.203 fused_ordering(323) 00:18:00.203 fused_ordering(324) 00:18:00.203 fused_ordering(325) 00:18:00.203 fused_ordering(326) 00:18:00.203 fused_ordering(327) 00:18:00.203 fused_ordering(328) 00:18:00.203 fused_ordering(329) 00:18:00.203 fused_ordering(330) 00:18:00.203 fused_ordering(331) 00:18:00.203 fused_ordering(332) 00:18:00.203 fused_ordering(333) 00:18:00.203 fused_ordering(334) 00:18:00.203 fused_ordering(335) 00:18:00.203 fused_ordering(336) 00:18:00.203 fused_ordering(337) 00:18:00.203 fused_ordering(338) 00:18:00.203 fused_ordering(339) 00:18:00.203 fused_ordering(340) 00:18:00.203 fused_ordering(341) 00:18:00.203 fused_ordering(342) 00:18:00.203 fused_ordering(343) 00:18:00.203 fused_ordering(344) 00:18:00.203 fused_ordering(345) 00:18:00.203 fused_ordering(346) 00:18:00.203 fused_ordering(347) 00:18:00.203 fused_ordering(348) 00:18:00.203 fused_ordering(349) 00:18:00.203 fused_ordering(350) 00:18:00.203 fused_ordering(351) 00:18:00.203 fused_ordering(352) 00:18:00.203 fused_ordering(353) 00:18:00.203 fused_ordering(354) 00:18:00.203 fused_ordering(355) 00:18:00.203 fused_ordering(356) 00:18:00.203 fused_ordering(357) 00:18:00.203 fused_ordering(358) 00:18:00.203 fused_ordering(359) 00:18:00.203 fused_ordering(360) 00:18:00.203 fused_ordering(361) 00:18:00.203 fused_ordering(362) 00:18:00.203 fused_ordering(363) 00:18:00.203 fused_ordering(364) 00:18:00.203 fused_ordering(365) 00:18:00.203 fused_ordering(366) 00:18:00.203 fused_ordering(367) 00:18:00.203 fused_ordering(368) 00:18:00.203 fused_ordering(369) 00:18:00.203 fused_ordering(370) 00:18:00.203 fused_ordering(371) 00:18:00.203 fused_ordering(372) 00:18:00.203 fused_ordering(373) 00:18:00.203 fused_ordering(374) 00:18:00.203 fused_ordering(375) 00:18:00.203 fused_ordering(376) 00:18:00.203 fused_ordering(377) 00:18:00.203 fused_ordering(378) 00:18:00.203 fused_ordering(379) 00:18:00.203 fused_ordering(380) 00:18:00.203 fused_ordering(381) 00:18:00.203 fused_ordering(382) 00:18:00.203 fused_ordering(383) 00:18:00.203 fused_ordering(384) 00:18:00.203 fused_ordering(385) 00:18:00.203 fused_ordering(386) 00:18:00.203 fused_ordering(387) 00:18:00.203 fused_ordering(388) 00:18:00.203 fused_ordering(389) 00:18:00.203 fused_ordering(390) 00:18:00.203 fused_ordering(391) 00:18:00.203 fused_ordering(392) 00:18:00.203 fused_ordering(393) 00:18:00.203 fused_ordering(394) 00:18:00.203 fused_ordering(395) 00:18:00.203 fused_ordering(396) 00:18:00.203 fused_ordering(397) 00:18:00.203 fused_ordering(398) 00:18:00.203 fused_ordering(399) 00:18:00.203 fused_ordering(400) 00:18:00.203 fused_ordering(401) 00:18:00.203 fused_ordering(402) 00:18:00.203 fused_ordering(403) 00:18:00.203 fused_ordering(404) 00:18:00.203 fused_ordering(405) 00:18:00.203 fused_ordering(406) 00:18:00.203 fused_ordering(407) 00:18:00.203 fused_ordering(408) 00:18:00.203 fused_ordering(409) 00:18:00.203 fused_ordering(410) 00:18:00.769 fused_ordering(411) 00:18:00.769 fused_ordering(412) 00:18:00.769 fused_ordering(413) 00:18:00.769 fused_ordering(414) 00:18:00.769 fused_ordering(415) 00:18:00.769 fused_ordering(416) 00:18:00.769 fused_ordering(417) 00:18:00.769 fused_ordering(418) 00:18:00.769 fused_ordering(419) 00:18:00.769 fused_ordering(420) 00:18:00.769 
fused_ordering(421) 00:18:00.769 fused_ordering(422) 00:18:00.769 fused_ordering(423) 00:18:00.769 fused_ordering(424) 00:18:00.769 fused_ordering(425) 00:18:00.769 fused_ordering(426) 00:18:00.769 fused_ordering(427) 00:18:00.769 fused_ordering(428) 00:18:00.769 fused_ordering(429) 00:18:00.769 fused_ordering(430) 00:18:00.769 fused_ordering(431) 00:18:00.769 fused_ordering(432) 00:18:00.769 fused_ordering(433) 00:18:00.769 fused_ordering(434) 00:18:00.769 fused_ordering(435) 00:18:00.769 fused_ordering(436) 00:18:00.769 fused_ordering(437) 00:18:00.769 fused_ordering(438) 00:18:00.769 fused_ordering(439) 00:18:00.769 fused_ordering(440) 00:18:00.769 fused_ordering(441) 00:18:00.769 fused_ordering(442) 00:18:00.769 fused_ordering(443) 00:18:00.769 fused_ordering(444) 00:18:00.769 fused_ordering(445) 00:18:00.769 fused_ordering(446) 00:18:00.769 fused_ordering(447) 00:18:00.769 fused_ordering(448) 00:18:00.769 fused_ordering(449) 00:18:00.769 fused_ordering(450) 00:18:00.769 fused_ordering(451) 00:18:00.769 fused_ordering(452) 00:18:00.769 fused_ordering(453) 00:18:00.769 fused_ordering(454) 00:18:00.769 fused_ordering(455) 00:18:00.769 fused_ordering(456) 00:18:00.769 fused_ordering(457) 00:18:00.769 fused_ordering(458) 00:18:00.769 fused_ordering(459) 00:18:00.769 fused_ordering(460) 00:18:00.769 fused_ordering(461) 00:18:00.769 fused_ordering(462) 00:18:00.769 fused_ordering(463) 00:18:00.769 fused_ordering(464) 00:18:00.769 fused_ordering(465) 00:18:00.769 fused_ordering(466) 00:18:00.769 fused_ordering(467) 00:18:00.769 fused_ordering(468) 00:18:00.769 fused_ordering(469) 00:18:00.769 fused_ordering(470) 00:18:00.769 fused_ordering(471) 00:18:00.769 fused_ordering(472) 00:18:00.769 fused_ordering(473) 00:18:00.769 fused_ordering(474) 00:18:00.769 fused_ordering(475) 00:18:00.769 fused_ordering(476) 00:18:00.769 fused_ordering(477) 00:18:00.769 fused_ordering(478) 00:18:00.769 fused_ordering(479) 00:18:00.769 fused_ordering(480) 00:18:00.769 fused_ordering(481) 00:18:00.769 fused_ordering(482) 00:18:00.769 fused_ordering(483) 00:18:00.769 fused_ordering(484) 00:18:00.769 fused_ordering(485) 00:18:00.769 fused_ordering(486) 00:18:00.769 fused_ordering(487) 00:18:00.769 fused_ordering(488) 00:18:00.769 fused_ordering(489) 00:18:00.769 fused_ordering(490) 00:18:00.769 fused_ordering(491) 00:18:00.769 fused_ordering(492) 00:18:00.769 fused_ordering(493) 00:18:00.769 fused_ordering(494) 00:18:00.769 fused_ordering(495) 00:18:00.769 fused_ordering(496) 00:18:00.769 fused_ordering(497) 00:18:00.769 fused_ordering(498) 00:18:00.769 fused_ordering(499) 00:18:00.769 fused_ordering(500) 00:18:00.769 fused_ordering(501) 00:18:00.769 fused_ordering(502) 00:18:00.769 fused_ordering(503) 00:18:00.769 fused_ordering(504) 00:18:00.769 fused_ordering(505) 00:18:00.769 fused_ordering(506) 00:18:00.769 fused_ordering(507) 00:18:00.769 fused_ordering(508) 00:18:00.769 fused_ordering(509) 00:18:00.769 fused_ordering(510) 00:18:00.769 fused_ordering(511) 00:18:00.769 fused_ordering(512) 00:18:00.769 fused_ordering(513) 00:18:00.769 fused_ordering(514) 00:18:00.769 fused_ordering(515) 00:18:00.769 fused_ordering(516) 00:18:00.769 fused_ordering(517) 00:18:00.769 fused_ordering(518) 00:18:00.769 fused_ordering(519) 00:18:00.769 fused_ordering(520) 00:18:00.769 fused_ordering(521) 00:18:00.769 fused_ordering(522) 00:18:00.769 fused_ordering(523) 00:18:00.769 fused_ordering(524) 00:18:00.769 fused_ordering(525) 00:18:00.769 fused_ordering(526) 00:18:00.769 fused_ordering(527) 00:18:00.769 fused_ordering(528) 
00:18:00.769 fused_ordering(529) 00:18:00.769 fused_ordering(530) 00:18:00.769 fused_ordering(531) 00:18:00.769 fused_ordering(532) 00:18:00.769 fused_ordering(533) 00:18:00.769 fused_ordering(534) 00:18:00.769 fused_ordering(535) 00:18:00.769 fused_ordering(536) 00:18:00.769 fused_ordering(537) 00:18:00.769 fused_ordering(538) 00:18:00.769 fused_ordering(539) 00:18:00.769 fused_ordering(540) 00:18:00.769 fused_ordering(541) 00:18:00.769 fused_ordering(542) 00:18:00.769 fused_ordering(543) 00:18:00.769 fused_ordering(544) 00:18:00.769 fused_ordering(545) 00:18:00.769 fused_ordering(546) 00:18:00.769 fused_ordering(547) 00:18:00.769 fused_ordering(548) 00:18:00.769 fused_ordering(549) 00:18:00.769 fused_ordering(550) 00:18:00.769 fused_ordering(551) 00:18:00.769 fused_ordering(552) 00:18:00.769 fused_ordering(553) 00:18:00.769 fused_ordering(554) 00:18:00.769 fused_ordering(555) 00:18:00.769 fused_ordering(556) 00:18:00.769 fused_ordering(557) 00:18:00.769 fused_ordering(558) 00:18:00.769 fused_ordering(559) 00:18:00.769 fused_ordering(560) 00:18:00.769 fused_ordering(561) 00:18:00.769 fused_ordering(562) 00:18:00.769 fused_ordering(563) 00:18:00.769 fused_ordering(564) 00:18:00.769 fused_ordering(565) 00:18:00.769 fused_ordering(566) 00:18:00.769 fused_ordering(567) 00:18:00.769 fused_ordering(568) 00:18:00.769 fused_ordering(569) 00:18:00.769 fused_ordering(570) 00:18:00.769 fused_ordering(571) 00:18:00.769 fused_ordering(572) 00:18:00.769 fused_ordering(573) 00:18:00.769 fused_ordering(574) 00:18:00.769 fused_ordering(575) 00:18:00.769 fused_ordering(576) 00:18:00.769 fused_ordering(577) 00:18:00.769 fused_ordering(578) 00:18:00.769 fused_ordering(579) 00:18:00.770 fused_ordering(580) 00:18:00.770 fused_ordering(581) 00:18:00.770 fused_ordering(582) 00:18:00.770 fused_ordering(583) 00:18:00.770 fused_ordering(584) 00:18:00.770 fused_ordering(585) 00:18:00.770 fused_ordering(586) 00:18:00.770 fused_ordering(587) 00:18:00.770 fused_ordering(588) 00:18:00.770 fused_ordering(589) 00:18:00.770 fused_ordering(590) 00:18:00.770 fused_ordering(591) 00:18:00.770 fused_ordering(592) 00:18:00.770 fused_ordering(593) 00:18:00.770 fused_ordering(594) 00:18:00.770 fused_ordering(595) 00:18:00.770 fused_ordering(596) 00:18:00.770 fused_ordering(597) 00:18:00.770 fused_ordering(598) 00:18:00.770 fused_ordering(599) 00:18:00.770 fused_ordering(600) 00:18:00.770 fused_ordering(601) 00:18:00.770 fused_ordering(602) 00:18:00.770 fused_ordering(603) 00:18:00.770 fused_ordering(604) 00:18:00.770 fused_ordering(605) 00:18:00.770 fused_ordering(606) 00:18:00.770 fused_ordering(607) 00:18:00.770 fused_ordering(608) 00:18:00.770 fused_ordering(609) 00:18:00.770 fused_ordering(610) 00:18:00.770 fused_ordering(611) 00:18:00.770 fused_ordering(612) 00:18:00.770 fused_ordering(613) 00:18:00.770 fused_ordering(614) 00:18:00.770 fused_ordering(615) 00:18:01.335 fused_ordering(616) 00:18:01.335 fused_ordering(617) 00:18:01.335 fused_ordering(618) 00:18:01.335 fused_ordering(619) 00:18:01.335 fused_ordering(620) 00:18:01.335 fused_ordering(621) 00:18:01.335 fused_ordering(622) 00:18:01.335 fused_ordering(623) 00:18:01.335 fused_ordering(624) 00:18:01.335 fused_ordering(625) 00:18:01.335 fused_ordering(626) 00:18:01.335 fused_ordering(627) 00:18:01.335 fused_ordering(628) 00:18:01.335 fused_ordering(629) 00:18:01.335 fused_ordering(630) 00:18:01.335 fused_ordering(631) 00:18:01.335 fused_ordering(632) 00:18:01.335 fused_ordering(633) 00:18:01.335 fused_ordering(634) 00:18:01.335 fused_ordering(635) 00:18:01.335 
fused_ordering(636) 00:18:01.335 fused_ordering(637) 00:18:01.335 fused_ordering(638) 00:18:01.335 fused_ordering(639) 00:18:01.335 fused_ordering(640) 00:18:01.335 fused_ordering(641) 00:18:01.335 fused_ordering(642) 00:18:01.335 fused_ordering(643) 00:18:01.335 fused_ordering(644) 00:18:01.335 fused_ordering(645) 00:18:01.335 fused_ordering(646) 00:18:01.335 fused_ordering(647) 00:18:01.335 fused_ordering(648) 00:18:01.335 fused_ordering(649) 00:18:01.335 fused_ordering(650) 00:18:01.335 fused_ordering(651) 00:18:01.335 fused_ordering(652) 00:18:01.335 fused_ordering(653) 00:18:01.335 fused_ordering(654) 00:18:01.335 fused_ordering(655) 00:18:01.335 fused_ordering(656) 00:18:01.335 fused_ordering(657) 00:18:01.335 fused_ordering(658) 00:18:01.335 fused_ordering(659) 00:18:01.335 fused_ordering(660) 00:18:01.335 fused_ordering(661) 00:18:01.335 fused_ordering(662) 00:18:01.335 fused_ordering(663) 00:18:01.335 fused_ordering(664) 00:18:01.335 fused_ordering(665) 00:18:01.335 fused_ordering(666) 00:18:01.336 fused_ordering(667) 00:18:01.336 fused_ordering(668) 00:18:01.336 fused_ordering(669) 00:18:01.336 fused_ordering(670) 00:18:01.336 fused_ordering(671) 00:18:01.336 fused_ordering(672) 00:18:01.336 fused_ordering(673) 00:18:01.336 fused_ordering(674) 00:18:01.336 fused_ordering(675) 00:18:01.336 fused_ordering(676) 00:18:01.336 fused_ordering(677) 00:18:01.336 fused_ordering(678) 00:18:01.336 fused_ordering(679) 00:18:01.336 fused_ordering(680) 00:18:01.336 fused_ordering(681) 00:18:01.336 fused_ordering(682) 00:18:01.336 fused_ordering(683) 00:18:01.336 fused_ordering(684) 00:18:01.336 fused_ordering(685) 00:18:01.336 fused_ordering(686) 00:18:01.336 fused_ordering(687) 00:18:01.336 fused_ordering(688) 00:18:01.336 fused_ordering(689) 00:18:01.336 fused_ordering(690) 00:18:01.336 fused_ordering(691) 00:18:01.336 fused_ordering(692) 00:18:01.336 fused_ordering(693) 00:18:01.336 fused_ordering(694) 00:18:01.336 fused_ordering(695) 00:18:01.336 fused_ordering(696) 00:18:01.336 fused_ordering(697) 00:18:01.336 fused_ordering(698) 00:18:01.336 fused_ordering(699) 00:18:01.336 fused_ordering(700) 00:18:01.336 fused_ordering(701) 00:18:01.336 fused_ordering(702) 00:18:01.336 fused_ordering(703) 00:18:01.336 fused_ordering(704) 00:18:01.336 fused_ordering(705) 00:18:01.336 fused_ordering(706) 00:18:01.336 fused_ordering(707) 00:18:01.336 fused_ordering(708) 00:18:01.336 fused_ordering(709) 00:18:01.336 fused_ordering(710) 00:18:01.336 fused_ordering(711) 00:18:01.336 fused_ordering(712) 00:18:01.336 fused_ordering(713) 00:18:01.336 fused_ordering(714) 00:18:01.336 fused_ordering(715) 00:18:01.336 fused_ordering(716) 00:18:01.336 fused_ordering(717) 00:18:01.336 fused_ordering(718) 00:18:01.336 fused_ordering(719) 00:18:01.336 fused_ordering(720) 00:18:01.336 fused_ordering(721) 00:18:01.336 fused_ordering(722) 00:18:01.336 fused_ordering(723) 00:18:01.336 fused_ordering(724) 00:18:01.336 fused_ordering(725) 00:18:01.336 fused_ordering(726) 00:18:01.336 fused_ordering(727) 00:18:01.336 fused_ordering(728) 00:18:01.336 fused_ordering(729) 00:18:01.336 fused_ordering(730) 00:18:01.336 fused_ordering(731) 00:18:01.336 fused_ordering(732) 00:18:01.336 fused_ordering(733) 00:18:01.336 fused_ordering(734) 00:18:01.336 fused_ordering(735) 00:18:01.336 fused_ordering(736) 00:18:01.336 fused_ordering(737) 00:18:01.336 fused_ordering(738) 00:18:01.336 fused_ordering(739) 00:18:01.336 fused_ordering(740) 00:18:01.336 fused_ordering(741) 00:18:01.336 fused_ordering(742) 00:18:01.336 fused_ordering(743) 
00:18:01.336 fused_ordering(744) 00:18:01.336 fused_ordering(745) 00:18:01.336 fused_ordering(746) 00:18:01.336 fused_ordering(747) 00:18:01.336 fused_ordering(748) 00:18:01.336 fused_ordering(749) 00:18:01.336 fused_ordering(750) 00:18:01.336 fused_ordering(751) 00:18:01.336 fused_ordering(752) 00:18:01.336 fused_ordering(753) 00:18:01.336 fused_ordering(754) 00:18:01.336 fused_ordering(755) 00:18:01.336 fused_ordering(756) 00:18:01.336 fused_ordering(757) 00:18:01.336 fused_ordering(758) 00:18:01.336 fused_ordering(759) 00:18:01.336 fused_ordering(760) 00:18:01.336 fused_ordering(761) 00:18:01.336 fused_ordering(762) 00:18:01.336 fused_ordering(763) 00:18:01.336 fused_ordering(764) 00:18:01.336 fused_ordering(765) 00:18:01.336 fused_ordering(766) 00:18:01.336 fused_ordering(767) 00:18:01.336 fused_ordering(768) 00:18:01.336 fused_ordering(769) 00:18:01.336 fused_ordering(770) 00:18:01.336 fused_ordering(771) 00:18:01.336 fused_ordering(772) 00:18:01.336 fused_ordering(773) 00:18:01.336 fused_ordering(774) 00:18:01.336 fused_ordering(775) 00:18:01.336 fused_ordering(776) 00:18:01.336 fused_ordering(777) 00:18:01.336 fused_ordering(778) 00:18:01.336 fused_ordering(779) 00:18:01.336 fused_ordering(780) 00:18:01.336 fused_ordering(781) 00:18:01.336 fused_ordering(782) 00:18:01.336 fused_ordering(783) 00:18:01.336 fused_ordering(784) 00:18:01.336 fused_ordering(785) 00:18:01.336 fused_ordering(786) 00:18:01.336 fused_ordering(787) 00:18:01.336 fused_ordering(788) 00:18:01.336 fused_ordering(789) 00:18:01.336 fused_ordering(790) 00:18:01.336 fused_ordering(791) 00:18:01.336 fused_ordering(792) 00:18:01.336 fused_ordering(793) 00:18:01.336 fused_ordering(794) 00:18:01.336 fused_ordering(795) 00:18:01.336 fused_ordering(796) 00:18:01.336 fused_ordering(797) 00:18:01.336 fused_ordering(798) 00:18:01.336 fused_ordering(799) 00:18:01.336 fused_ordering(800) 00:18:01.336 fused_ordering(801) 00:18:01.336 fused_ordering(802) 00:18:01.336 fused_ordering(803) 00:18:01.336 fused_ordering(804) 00:18:01.336 fused_ordering(805) 00:18:01.336 fused_ordering(806) 00:18:01.336 fused_ordering(807) 00:18:01.336 fused_ordering(808) 00:18:01.336 fused_ordering(809) 00:18:01.336 fused_ordering(810) 00:18:01.336 fused_ordering(811) 00:18:01.336 fused_ordering(812) 00:18:01.336 fused_ordering(813) 00:18:01.336 fused_ordering(814) 00:18:01.336 fused_ordering(815) 00:18:01.336 fused_ordering(816) 00:18:01.336 fused_ordering(817) 00:18:01.336 fused_ordering(818) 00:18:01.336 fused_ordering(819) 00:18:01.336 fused_ordering(820) 00:18:02.270 fused_ordering(821) 00:18:02.270 fused_ordering(822) 00:18:02.270 fused_ordering(823) 00:18:02.270 fused_ordering(824) 00:18:02.270 fused_ordering(825) 00:18:02.270 fused_ordering(826) 00:18:02.270 fused_ordering(827) 00:18:02.270 fused_ordering(828) 00:18:02.270 fused_ordering(829) 00:18:02.270 fused_ordering(830) 00:18:02.270 fused_ordering(831) 00:18:02.270 fused_ordering(832) 00:18:02.270 fused_ordering(833) 00:18:02.270 fused_ordering(834) 00:18:02.270 fused_ordering(835) 00:18:02.270 fused_ordering(836) 00:18:02.270 fused_ordering(837) 00:18:02.270 fused_ordering(838) 00:18:02.270 fused_ordering(839) 00:18:02.270 fused_ordering(840) 00:18:02.270 fused_ordering(841) 00:18:02.270 fused_ordering(842) 00:18:02.270 fused_ordering(843) 00:18:02.270 fused_ordering(844) 00:18:02.270 fused_ordering(845) 00:18:02.270 fused_ordering(846) 00:18:02.270 fused_ordering(847) 00:18:02.270 fused_ordering(848) 00:18:02.270 fused_ordering(849) 00:18:02.270 fused_ordering(850) 00:18:02.270 
fused_ordering(851) 00:18:02.270 fused_ordering(852) 00:18:02.270 fused_ordering(853) 00:18:02.270 fused_ordering(854) 00:18:02.270 fused_ordering(855) 00:18:02.270 fused_ordering(856) 00:18:02.270 fused_ordering(857) 00:18:02.270 fused_ordering(858) 00:18:02.270 fused_ordering(859) 00:18:02.270 fused_ordering(860) 00:18:02.270 fused_ordering(861) 00:18:02.270 fused_ordering(862) 00:18:02.270 fused_ordering(863) 00:18:02.270 fused_ordering(864) 00:18:02.270 fused_ordering(865) 00:18:02.270 fused_ordering(866) 00:18:02.270 fused_ordering(867) 00:18:02.270 fused_ordering(868) 00:18:02.270 fused_ordering(869) 00:18:02.270 fused_ordering(870) 00:18:02.270 fused_ordering(871) 00:18:02.270 fused_ordering(872) 00:18:02.270 fused_ordering(873) 00:18:02.270 fused_ordering(874) 00:18:02.270 fused_ordering(875) 00:18:02.270 fused_ordering(876) 00:18:02.270 fused_ordering(877) 00:18:02.270 fused_ordering(878) 00:18:02.270 fused_ordering(879) 00:18:02.270 fused_ordering(880) 00:18:02.270 fused_ordering(881) 00:18:02.270 fused_ordering(882) 00:18:02.270 fused_ordering(883) 00:18:02.270 fused_ordering(884) 00:18:02.270 fused_ordering(885) 00:18:02.270 fused_ordering(886) 00:18:02.270 fused_ordering(887) 00:18:02.270 fused_ordering(888) 00:18:02.270 fused_ordering(889) 00:18:02.270 fused_ordering(890) 00:18:02.270 fused_ordering(891) 00:18:02.270 fused_ordering(892) 00:18:02.270 fused_ordering(893) 00:18:02.270 fused_ordering(894) 00:18:02.270 fused_ordering(895) 00:18:02.270 fused_ordering(896) 00:18:02.270 fused_ordering(897) 00:18:02.270 fused_ordering(898) 00:18:02.270 fused_ordering(899) 00:18:02.270 fused_ordering(900) 00:18:02.270 fused_ordering(901) 00:18:02.270 fused_ordering(902) 00:18:02.270 fused_ordering(903) 00:18:02.270 fused_ordering(904) 00:18:02.270 fused_ordering(905) 00:18:02.270 fused_ordering(906) 00:18:02.270 fused_ordering(907) 00:18:02.270 fused_ordering(908) 00:18:02.270 fused_ordering(909) 00:18:02.270 fused_ordering(910) 00:18:02.270 fused_ordering(911) 00:18:02.270 fused_ordering(912) 00:18:02.270 fused_ordering(913) 00:18:02.270 fused_ordering(914) 00:18:02.270 fused_ordering(915) 00:18:02.270 fused_ordering(916) 00:18:02.270 fused_ordering(917) 00:18:02.271 fused_ordering(918) 00:18:02.271 fused_ordering(919) 00:18:02.271 fused_ordering(920) 00:18:02.271 fused_ordering(921) 00:18:02.271 fused_ordering(922) 00:18:02.271 fused_ordering(923) 00:18:02.271 fused_ordering(924) 00:18:02.271 fused_ordering(925) 00:18:02.271 fused_ordering(926) 00:18:02.271 fused_ordering(927) 00:18:02.271 fused_ordering(928) 00:18:02.271 fused_ordering(929) 00:18:02.271 fused_ordering(930) 00:18:02.271 fused_ordering(931) 00:18:02.271 fused_ordering(932) 00:18:02.271 fused_ordering(933) 00:18:02.271 fused_ordering(934) 00:18:02.271 fused_ordering(935) 00:18:02.271 fused_ordering(936) 00:18:02.271 fused_ordering(937) 00:18:02.271 fused_ordering(938) 00:18:02.271 fused_ordering(939) 00:18:02.271 fused_ordering(940) 00:18:02.271 fused_ordering(941) 00:18:02.271 fused_ordering(942) 00:18:02.271 fused_ordering(943) 00:18:02.271 fused_ordering(944) 00:18:02.271 fused_ordering(945) 00:18:02.271 fused_ordering(946) 00:18:02.271 fused_ordering(947) 00:18:02.271 fused_ordering(948) 00:18:02.271 fused_ordering(949) 00:18:02.271 fused_ordering(950) 00:18:02.271 fused_ordering(951) 00:18:02.271 fused_ordering(952) 00:18:02.271 fused_ordering(953) 00:18:02.271 fused_ordering(954) 00:18:02.271 fused_ordering(955) 00:18:02.271 fused_ordering(956) 00:18:02.271 fused_ordering(957) 00:18:02.271 fused_ordering(958) 
00:18:02.271 fused_ordering(959) 00:18:02.271 fused_ordering(960) 00:18:02.271 fused_ordering(961) 00:18:02.271 fused_ordering(962) 00:18:02.271 fused_ordering(963) 00:18:02.271 fused_ordering(964) 00:18:02.271 fused_ordering(965) 00:18:02.271 fused_ordering(966) 00:18:02.271 fused_ordering(967) 00:18:02.271 fused_ordering(968) 00:18:02.271 fused_ordering(969) 00:18:02.271 fused_ordering(970) 00:18:02.271 fused_ordering(971) 00:18:02.271 fused_ordering(972) 00:18:02.271 fused_ordering(973) 00:18:02.271 fused_ordering(974) 00:18:02.271 fused_ordering(975) 00:18:02.271 fused_ordering(976) 00:18:02.271 fused_ordering(977) 00:18:02.271 fused_ordering(978) 00:18:02.271 fused_ordering(979) 00:18:02.271 fused_ordering(980) 00:18:02.271 fused_ordering(981) 00:18:02.271 fused_ordering(982) 00:18:02.271 fused_ordering(983) 00:18:02.271 fused_ordering(984) 00:18:02.271 fused_ordering(985) 00:18:02.271 fused_ordering(986) 00:18:02.271 fused_ordering(987) 00:18:02.271 fused_ordering(988) 00:18:02.271 fused_ordering(989) 00:18:02.271 fused_ordering(990) 00:18:02.271 fused_ordering(991) 00:18:02.271 fused_ordering(992) 00:18:02.271 fused_ordering(993) 00:18:02.271 fused_ordering(994) 00:18:02.271 fused_ordering(995) 00:18:02.271 fused_ordering(996) 00:18:02.271 fused_ordering(997) 00:18:02.271 fused_ordering(998) 00:18:02.271 fused_ordering(999) 00:18:02.271 fused_ordering(1000) 00:18:02.271 fused_ordering(1001) 00:18:02.271 fused_ordering(1002) 00:18:02.271 fused_ordering(1003) 00:18:02.271 fused_ordering(1004) 00:18:02.271 fused_ordering(1005) 00:18:02.271 fused_ordering(1006) 00:18:02.271 fused_ordering(1007) 00:18:02.271 fused_ordering(1008) 00:18:02.271 fused_ordering(1009) 00:18:02.271 fused_ordering(1010) 00:18:02.271 fused_ordering(1011) 00:18:02.271 fused_ordering(1012) 00:18:02.271 fused_ordering(1013) 00:18:02.271 fused_ordering(1014) 00:18:02.271 fused_ordering(1015) 00:18:02.271 fused_ordering(1016) 00:18:02.271 fused_ordering(1017) 00:18:02.271 fused_ordering(1018) 00:18:02.271 fused_ordering(1019) 00:18:02.271 fused_ordering(1020) 00:18:02.271 fused_ordering(1021) 00:18:02.271 fused_ordering(1022) 00:18:02.271 fused_ordering(1023) 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.271 rmmod nvme_tcp 00:18:02.271 rmmod nvme_fabrics 00:18:02.271 rmmod nvme_keyring 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:02.271 19:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2973460 ']' 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2973460 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2973460 ']' 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2973460 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2973460 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2973460' 00:18:02.271 killing process with pid 2973460 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2973460 00:18:02.271 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2973460 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.646 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:05.550 00:18:05.550 real 0m10.124s 00:18:05.550 user 0m8.407s 00:18:05.550 sys 0m3.585s 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.550 ************************************ 00:18:05.550 END TEST nvmf_fused_ordering 00:18:05.550 
************************************ 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.550 ************************************ 00:18:05.550 START TEST nvmf_ns_masking 00:18:05.550 ************************************ 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:05.550 * Looking for test storage... 00:18:05.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:05.550 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:05.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.551 --rc genhtml_branch_coverage=1 00:18:05.551 --rc genhtml_function_coverage=1 00:18:05.551 --rc genhtml_legend=1 00:18:05.551 --rc geninfo_all_blocks=1 00:18:05.551 --rc geninfo_unexecuted_blocks=1 00:18:05.551 00:18:05.551 ' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:05.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.551 --rc genhtml_branch_coverage=1 00:18:05.551 --rc genhtml_function_coverage=1 00:18:05.551 --rc genhtml_legend=1 00:18:05.551 --rc geninfo_all_blocks=1 00:18:05.551 --rc geninfo_unexecuted_blocks=1 00:18:05.551 00:18:05.551 ' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:05.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.551 --rc genhtml_branch_coverage=1 00:18:05.551 --rc genhtml_function_coverage=1 00:18:05.551 --rc genhtml_legend=1 00:18:05.551 --rc geninfo_all_blocks=1 00:18:05.551 --rc geninfo_unexecuted_blocks=1 00:18:05.551 00:18:05.551 ' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:05.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.551 --rc genhtml_branch_coverage=1 00:18:05.551 --rc genhtml_function_coverage=1 00:18:05.551 --rc genhtml_legend=1 00:18:05.551 --rc geninfo_all_blocks=1 00:18:05.551 --rc geninfo_unexecuted_blocks=1 00:18:05.551 00:18:05.551 ' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:05.551 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=592af495-7116-4cc5-ad6b-3d6d6a2af791 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=806196b6-bd96-44a9-8a40-0b73bc579c3b 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c99f2efc-5740-4c82-afc1-8a50dee3aaa3 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:05.810 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.714 19:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.714 19:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
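Both E810 ports (8086:0x159b) have now been resolved to kernel net devices purely by globbing sysfs; no driver-specific tooling is involved. A minimal stand-alone sketch of that lookup, assuming only the PCI address printed above (everything else in the snippet is illustrative, not part of the test scripts):

    # Resolve a PCI network function to its net device, the same way
    # nvmf/common.sh does via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    pci=0000:0a:00.0
    for d in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$d" ] || continue      # skip functions with no bound netdev
        echo "${d##*/}"              # prints cvl_0_0 on this machine
    done

The two devices found this way, cvl_0_0 and cvl_0_1, become the target and initiator interfaces for the rest of the run.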
00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.714 19:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.714 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:18:07.973 00:18:07.973 --- 10.0.0.2 ping statistics --- 00:18:07.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.973 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:18:07.973 00:18:07.973 --- 10.0.0.1 ping statistics --- 00:18:07.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.973 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2976024 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2976024 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2976024 ']' 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.973 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.974 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.974 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:07.974 [2024-10-13 19:47:57.652922] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:18:07.974 [2024-10-13 19:47:57.653070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.974 [2024-10-13 19:47:57.785749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.245 [2024-10-13 19:47:57.915329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.245 [2024-10-13 19:47:57.915441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.245 [2024-10-13 19:47:57.915469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.245 [2024-10-13 19:47:57.915493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.245 [2024-10-13 19:47:57.915512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
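With the target process launched inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock, everything that follows is driven over JSON-RPC. The bring-up performed by the next entries reduces to the rpc.py calls below (transport options, bdev sizes, NQN and serial are copied verbatim from the log; treat this as a sketch of the sequence, not a substitute for ns_masking.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags as logged
    $rpc bdev_malloc_create 64 512 -b Malloc1                    # 64 MiB backing bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The masking itself is exercised later in the same way: the namespace is re-added with --no-auto-visible, exposed or hidden per host with nvmf_ns_add_host / nvmf_ns_remove_host, and checked from the initiator side with nvme list-ns plus the nguid reported by nvme id-ns ... -o json | jq -r .nguid, an all-zero nguid being what the script treats as the namespace being hidden from that host.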
00:18:08.245 [2024-10-13 19:47:57.917146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.191 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:09.191 [2024-10-13 19:47:58.984537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.191 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:09.191 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:09.449 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:09.706 Malloc1 00:18:09.706 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:09.964 Malloc2 00:18:09.964 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:10.222 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:10.788 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.788 [2024-10-13 19:48:00.564363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.788 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:10.788 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c99f2efc-5740-4c82-afc1-8a50dee3aaa3 -a 10.0.0.2 -s 4420 -i 4 00:18:11.047 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.047 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:11.047 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.047 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:11.047 
19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:12.946 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:13.203 [ 0]:0x1 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9298a248595d44378275b0c357b557b0 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9298a248595d44378275b0c357b557b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.203 19:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:13.461 [ 0]:0x1 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9298a248595d44378275b0c357b557b0 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9298a248595d44378275b0c357b557b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.461 19:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.461 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:13.461 [ 1]:0x2 00:18:13.462 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:13.462 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.462 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c09b3265e955430f9f6073f27fe99969 00:18:13.462 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.462 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:13.462 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:13.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.719 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.975 19:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c99f2efc-5740-4c82-afc1-8a50dee3aaa3 -a 10.0.0.2 -s 4420 -i 4 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:14.597 19:48:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:16.495 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:16.753 [ 0]:0x2 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c09b3265e955430f9f6073f27fe99969 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:16.753 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.011 [ 0]:0x1 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9298a248595d44378275b0c357b557b0 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9298a248595d44378275b0c357b557b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.011 [ 1]:0x2 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.011 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.269 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c09b3265e955430f9f6073f27fe99969 00:18:17.269 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.269 19:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.527 19:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.527 [ 0]:0x2 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c09b3265e955430f9f6073f27fe99969 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.527 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.785 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:17.785 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c99f2efc-5740-4c82-afc1-8a50dee3aaa3 -a 10.0.0.2 -s 4420 -i 4 00:18:18.043 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:18.043 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:18.043 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.043 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:18.043 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:18.043 19:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:19.942 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.200 [ 0]:0x1 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9298a248595d44378275b0c357b557b0 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9298a248595d44378275b0c357b557b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.200 [ 1]:0x2 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c09b3265e955430f9f6073f27fe99969 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.200 19:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.458 [ 0]:0x2 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.458 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c09b3265e955430f9f6073f27fe99969 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.716 19:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:20.716 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:20.974 [2024-10-13 19:48:10.564175] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:20.974 request: 00:18:20.974 { 00:18:20.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.974 "nsid": 2, 00:18:20.974 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.974 "method": "nvmf_ns_remove_host", 00:18:20.974 "req_id": 1 00:18:20.974 } 00:18:20.974 Got JSON-RPC error response 00:18:20.974 response: 00:18:20.974 { 00:18:20.974 "code": -32602, 00:18:20.974 "message": "Invalid parameters" 00:18:20.974 } 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:20.975 19:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.975 [ 0]:0x2 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c09b3265e955430f9f6073f27fe99969 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c09b3265e955430f9f6073f27fe99969 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2978341 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2978341 /var/tmp/host.sock 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2978341 ']' 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:20.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.975 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:21.233 [2024-10-13 19:48:10.813986] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:18:21.233 [2024-10-13 19:48:10.814130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978341 ] 00:18:21.233 [2024-10-13 19:48:10.940482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.491 [2024-10-13 19:48:11.072095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.425 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.425 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:22.425 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.683 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:22.940 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 592af495-7116-4cc5-ad6b-3d6d6a2af791 00:18:22.940 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:22.941 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 592AF49571164CC5AD6B3D6D6A2AF791 -i 00:18:23.198 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 806196b6-bd96-44a9-8a40-0b73bc579c3b 00:18:23.198 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:23.198 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 806196B6BD9644A98A400B73BC579C3B -i 00:18:23.764 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:24.021 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:24.279 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:24.279 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:24.538 nvme0n1 00:18:24.538 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:24.538 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:25.103 nvme1n2 00:18:25.103 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:25.103 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:25.103 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:25.103 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:25.103 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:25.361 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:25.361 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:25.361 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:25.361 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:25.619 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 592af495-7116-4cc5-ad6b-3d6d6a2af791 == \5\9\2\a\f\4\9\5\-\7\1\1\6\-\4\c\c\5\-\a\d\6\b\-\3\d\6\d\6\a\2\a\f\7\9\1 ]] 00:18:25.619 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:25.619 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:25.619 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
806196b6-bd96-44a9-8a40-0b73bc579c3b == \8\0\6\1\9\6\b\6\-\b\d\9\6\-\4\4\a\9\-\8\a\4\0\-\0\b\7\3\b\c\5\7\9\c\3\b ]] 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2978341 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2978341 ']' 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2978341 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2978341 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2978341' 00:18:25.877 killing process with pid 2978341 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2978341 00:18:25.877 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2978341 00:18:28.405 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.405 rmmod nvme_tcp 00:18:28.405 rmmod nvme_fabrics 00:18:28.405 rmmod nvme_keyring 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2976024 ']' 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2976024 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2976024 ']' 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2976024 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.405 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2976024 00:18:28.663 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.663 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.663 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2976024' 00:18:28.663 killing process with pid 2976024 00:18:28.663 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2976024 00:18:28.663 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2976024 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.038 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:32.572 00:18:32.572 real 0m26.642s 00:18:32.572 user 0m36.839s 00:18:32.572 sys 0m4.606s 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.572 ************************************ 00:18:32.572 END TEST nvmf_ns_masking 00:18:32.572 ************************************ 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
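Before the next test starts, it is worth spelling out what the ns_masking trace above actually exercises. The visibility probe at target/ns_masking.sh@43-45 and the add/remove-host RPCs reduce to roughly the sketch below. This is reconstructed from the xtrace lines, not the verbatim script: the controller name nvme0 is simply what this run resolved via nvme list-subsys, and the rpc.py path is shortened for readability.

# Sketch of the namespace-visibility check traced above: a namespace counts
# as visible when `nvme list-ns` reports it and its NGUID from `nvme id-ns`
# is non-zero; a masked namespace reports an all-zero NGUID to this host.
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep "$nsid"
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# Masking is driven from the target side with the RPCs seen in the trace:
# detach namespace 1 from host1, confirm it is now hidden, then re-attach it.
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
ns_is_visible 0x1 || echo "nsid 1 now hidden from host1"
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1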
00:18:32.572 ************************************ 00:18:32.572 START TEST nvmf_nvme_cli 00:18:32.572 ************************************ 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:32.572 * Looking for test storage... 00:18:32.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:18:32.572 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.572 --rc genhtml_branch_coverage=1 00:18:32.572 --rc genhtml_function_coverage=1 00:18:32.572 --rc genhtml_legend=1 00:18:32.572 --rc geninfo_all_blocks=1 00:18:32.572 --rc geninfo_unexecuted_blocks=1 00:18:32.572 00:18:32.572 ' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.572 --rc genhtml_branch_coverage=1 00:18:32.572 --rc genhtml_function_coverage=1 00:18:32.572 --rc genhtml_legend=1 00:18:32.572 --rc geninfo_all_blocks=1 00:18:32.572 --rc geninfo_unexecuted_blocks=1 00:18:32.572 00:18:32.572 ' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.572 --rc genhtml_branch_coverage=1 00:18:32.572 --rc genhtml_function_coverage=1 00:18:32.572 --rc genhtml_legend=1 00:18:32.572 --rc geninfo_all_blocks=1 00:18:32.572 --rc geninfo_unexecuted_blocks=1 00:18:32.572 00:18:32.572 ' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.572 --rc genhtml_branch_coverage=1 00:18:32.572 --rc genhtml_function_coverage=1 00:18:32.572 --rc genhtml_legend=1 00:18:32.572 --rc geninfo_all_blocks=1 00:18:32.572 --rc geninfo_unexecuted_blocks=1 00:18:32.572 00:18:32.572 ' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
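The lcov probe traced here runs scripts/common.sh's dotted-version comparison ("lt 1.15 2") to decide which coverage options apply. In outline it behaves like the sketch below; the helper name version_lt is made up for illustration, and the real cmp_versions also splits on the ".-:" separator to handle suffixes.

# Illustration of the field-by-field version comparison being traced:
# split both versions on ".", compare numeric fields left to right, and
# treat a missing field as 0. For "1.15" vs "2" the first field decides it.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "installed lcov predates 2.x"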
00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.572 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:32.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.573 19:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:32.573 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:34.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:34.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.474 
19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:34.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:34.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.474 19:48:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.474 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.474 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.474 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:34.474 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.474 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.474 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:34.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:18:34.475 00:18:34.475 --- 10.0.0.2 ping statistics --- 00:18:34.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.475 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:34.475 00:18:34.475 --- 10.0.0.1 ping statistics --- 00:18:34.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.475 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=2981362 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 2981362 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2981362 ']' 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.475 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:34.475 [2024-10-13 19:48:24.204062] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
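For orientation, the nvmftestinit plumbing traced just above (nvmf_tcp_init and nvmfappstart in nvmf/common.sh) amounts to the commands below, copied in condensed form from the trace; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the shortened nvmf_tgt path are specific to this run, and the real iptables rule also carries an SPDK_NVMF comment.

# The target NIC is isolated in its own network namespace; the initiator-side
# port stays in the root namespace. Both directions are pinged once before
# the target application is started inside the namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt then runs inside the namespace, which is why every later
# target-side command in the trace is wrapped in `ip netns exec`.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF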
00:18:34.475 [2024-10-13 19:48:24.204223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.733 [2024-10-13 19:48:24.347745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.733 [2024-10-13 19:48:24.494735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.733 [2024-10-13 19:48:24.494826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.733 [2024-10-13 19:48:24.494857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.733 [2024-10-13 19:48:24.494884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.733 [2024-10-13 19:48:24.494905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.733 [2024-10-13 19:48:24.497778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.733 [2024-10-13 19:48:24.497837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.733 [2024-10-13 19:48:24.497889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.733 [2024-10-13 19:48:24.497896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 [2024-10-13 19:48:25.196495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 Malloc0 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
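The rpc_cmd sequence this test drives (nvme_cli.sh@19-28, partly traced above and continuing just below) stands up the target in the usual way. Expressed as plain rpc.py invocations against the target's RPC socket, it looks roughly like this, with the long workspace path omitted:

# Target bring-up exercised by nvme_cli.sh: a TCP transport, two 64 MB malloc
# bdevs with 512-byte blocks, one subsystem exposing both as namespaces, and
# data plus discovery listeners on the in-namespace address 10.0.0.2:4420.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
    -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420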
00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 Malloc1 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 [2024-10-13 19:48:25.388114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.668 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:35.926 00:18:35.926 Discovery Log Number of Records 2, Generation counter 2 00:18:35.926 =====Discovery Log Entry 0====== 00:18:35.926 trtype: tcp 00:18:35.926 adrfam: ipv4 00:18:35.926 subtype: current discovery subsystem 00:18:35.926 treq: not required 00:18:35.926 portid: 0 00:18:35.926 trsvcid: 4420 00:18:35.926 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:35.926 traddr: 10.0.0.2 00:18:35.926 eflags: explicit discovery connections, duplicate discovery information 00:18:35.926 sectype: none 00:18:35.926 =====Discovery Log Entry 1====== 00:18:35.926 trtype: tcp 00:18:35.926 adrfam: ipv4 00:18:35.926 subtype: nvme subsystem 00:18:35.926 treq: not required 00:18:35.926 portid: 0 00:18:35.926 trsvcid: 4420 00:18:35.926 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:35.926 traddr: 10.0.0.2 00:18:35.926 eflags: none 00:18:35.926 sectype: none 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:35.926 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:36.492 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:36.492 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:36.492 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.492 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:36.492 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:36.492 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:38.389 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:38.389 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:38.389 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:38.647 19:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:38.647 /dev/nvme0n2 ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:38.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.647 19:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:38.647 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.648 rmmod nvme_tcp 00:18:38.648 rmmod nvme_fabrics 00:18:38.648 rmmod nvme_keyring 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 2981362 ']' 00:18:38.648 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 2981362 00:18:38.905 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2981362 ']' 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2981362 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2981362 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2981362' 00:18:38.906 killing process with pid 2981362 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2981362 00:18:38.906 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2981362 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.279 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:42.184 00:18:42.184 real 0m10.072s 00:18:42.184 user 0m21.204s 00:18:42.184 sys 0m2.402s 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:42.184 ************************************ 00:18:42.184 END TEST nvmf_nvme_cli 00:18:42.184 ************************************ 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.184 19:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.443 ************************************ 00:18:42.443 START TEST nvmf_auth_target 00:18:42.443 ************************************ 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
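That closes out the nvme_cli test: the controller was disconnected, the subsystem deleted, the nvme kernel modules unloaded, the iptables rules and the spdk network namespace cleaned up, and target process 2981362 killed and reaped. The harness then moves on to the next test in nvmf_target_extra.sh, DH-HMAC-CHAP authentication, by handing test/nvmf/target/auth.sh to run_test, which roughly just adds timing and the START/END TEST banners around the script. Reproducing it by hand on the same phy setup would be approximately:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/target/auth.sh --transport=tcp   # the harness runs this as root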
00:18:42.443 * Looking for test storage... 00:18:42.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.443 --rc genhtml_branch_coverage=1 00:18:42.443 --rc genhtml_function_coverage=1 00:18:42.443 --rc genhtml_legend=1 00:18:42.443 --rc geninfo_all_blocks=1 00:18:42.443 --rc geninfo_unexecuted_blocks=1 00:18:42.443 00:18:42.443 ' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.443 --rc genhtml_branch_coverage=1 00:18:42.443 --rc genhtml_function_coverage=1 00:18:42.443 --rc genhtml_legend=1 00:18:42.443 --rc geninfo_all_blocks=1 00:18:42.443 --rc geninfo_unexecuted_blocks=1 00:18:42.443 00:18:42.443 ' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.443 --rc genhtml_branch_coverage=1 00:18:42.443 --rc genhtml_function_coverage=1 00:18:42.443 --rc genhtml_legend=1 00:18:42.443 --rc geninfo_all_blocks=1 00:18:42.443 --rc geninfo_unexecuted_blocks=1 00:18:42.443 00:18:42.443 ' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.443 --rc genhtml_branch_coverage=1 00:18:42.443 --rc genhtml_function_coverage=1 00:18:42.443 --rc genhtml_legend=1 00:18:42.443 --rc geninfo_all_blocks=1 00:18:42.443 --rc geninfo_unexecuted_blocks=1 00:18:42.443 00:18:42.443 ' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.443 19:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.443 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:42.444 19:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:44.973 
19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:44.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:44.973 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.974 19:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:44.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:44.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:44.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:44.974 19:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:44.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:18:44.974 00:18:44.974 --- 10.0.0.2 ping statistics --- 00:18:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.974 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:18:44.974 00:18:44.974 --- 10.0.0.1 ping statistics --- 00:18:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.974 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2984007 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2984007 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2984007 ']' 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
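The network plumbing for the auth test mirrors the earlier tests: the first E810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables ACCEPT rule opens port 4420 on cvl_0_1, and the two pings above prove reachability in both directions before nvmf_tgt is launched inside the namespace with the nvmf_auth log flag enabled. A quick manual check of the same wiring:

    ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0    # expect 10.0.0.2/24
    ip -4 addr show cvl_0_1                                  # expect 10.0.0.1/24
    ping -c 1 10.0.0.2                                       # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target namespace -> initiator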
00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.974 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2984158 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:45.909 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a376cb7348a3f47e536e7a4c18d5fabea75e29bf4c5ca28d 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.8Nx 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a376cb7348a3f47e536e7a4c18d5fabea75e29bf4c5ca28d 0 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a376cb7348a3f47e536e7a4c18d5fabea75e29bf4c5ca28d 0 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a376cb7348a3f47e536e7a4c18d5fabea75e29bf4c5ca28d 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
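gen_dhchap_key pulls the requested number of hex characters from /dev/urandom (here 48 for a null-hash key and 64 for a SHA-512 one, matching the digests map shown above) and format_dhchap_key wraps them into the textual DH-HMAC-CHAP secret form DHHC-1:<hash id>:<base64 payload>:, with the small inline python doing the base64 step; in that representation the payload conventionally carries the secret bytes plus a CRC-32 check value, which is assumed here since the log never prints the encoded file contents. Each key lands in a mktemp'd file restricted to mode 0600, so the generated artifacts look like:

    # contents are illustrative; the log does not show the encoded secret
    ls -l /tmp/spdk.key-null.8Nx /tmp/spdk.key-sha512.0VZ    # -rw------- (0600)
    cat /tmp/spdk.key-null.8Nx                               # DHHC-1:<hash id>:<base64 payload>: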
00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.8Nx 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.8Nx 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.8Nx 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f96176321f7fb811625c1c85ca5ee4112b3531f065555d25c0c3cbe497b3218d 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0VZ 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f96176321f7fb811625c1c85ca5ee4112b3531f065555d25c0c3cbe497b3218d 3 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f96176321f7fb811625c1c85ca5ee4112b3531f065555d25c0c3cbe497b3218d 3 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f96176321f7fb811625c1c85ca5ee4112b3531f065555d25c0c3cbe497b3218d 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0VZ 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0VZ 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0VZ 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
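The secrets are collected in two parallel arrays: keys[i] holds the host secret for iteration i and ckeys[i] the matching controller secret used when bidirectional authentication is exercised; that pairing is inferred from the array names, since the log only shows the assignments themselves. So far the test has:

    keys[0]=/tmp/spdk.key-null.8Nx       # 48-character secret, null hash
    ckeys[0]=/tmp/spdk.key-sha512.0VZ    # 64-character secret, SHA-512
    keys[1]=/tmp/spdk.key-sha256.bav     # 32-character secret, SHA-256 (just generated above)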
00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a800952b1f00fc2d6a38358c9f6d541f 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.bav 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a800952b1f00fc2d6a38358c9f6d541f 1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a800952b1f00fc2d6a38358c9f6d541f 1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a800952b1f00fc2d6a38358c9f6d541f 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.bav 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.bav 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.bav 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0e102854e6cd073d50bd10bda7a3a078861d0158798df5c4 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.3DF 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0e102854e6cd073d50bd10bda7a3a078861d0158798df5c4 2 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0e102854e6cd073d50bd10bda7a3a078861d0158798df5c4 2 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:45.910 19:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0e102854e6cd073d50bd10bda7a3a078861d0158798df5c4 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:45.910 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.3DF 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.3DF 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3DF 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0fc45b38688e8180b373092fb31c4c24490696467dd1ff93 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.nb5 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0fc45b38688e8180b373092fb31c4c24490696467dd1ff93 2 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0fc45b38688e8180b373092fb31c4c24490696467dd1ff93 2 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0fc45b38688e8180b373092fb31c4c24490696467dd1ff93 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.nb5 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.nb5 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.nb5 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
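These DHHC-1 files are what the target and the host-side initiator will later exchange during DH-HMAC-CHAP. The same style of secret also works with a plain kernel initiator; with a reasonably recent nvme-cli the connect would look roughly as below (subsystem NQN and address come from this test's defaults, and the --dhchap-secret/--dhchap-ctrl-secret options are assumed to be present in the installed nvme-cli rather than shown in this log):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-secret="$(cat /tmp/spdk.key-sha256.bav)" \
        --dhchap-ctrl-secret="$(cat /tmp/spdk.key-sha384.3DF)"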
00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d7afeb3fe1be2f498c0250146c07046f 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.o1k 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d7afeb3fe1be2f498c0250146c07046f 1 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d7afeb3fe1be2f498c0250146c07046f 1 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d7afeb3fe1be2f498c0250146c07046f 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:46.169 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.o1k 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.o1k 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.o1k 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8fea614369ebdca2e2e92013bdaf1adfae9a86e5a5b152852dca0163b263f35c 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.oWG 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 8fea614369ebdca2e2e92013bdaf1adfae9a86e5a5b152852dca0163b263f35c 3 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8fea614369ebdca2e2e92013bdaf1adfae9a86e5a5b152852dca0163b263f35c 3 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8fea614369ebdca2e2e92013bdaf1adfae9a86e5a5b152852dca0163b263f35c 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.oWG 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.oWG 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.oWG 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2984007 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2984007 ']' 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.170 19:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2984158 /var/tmp/host.sock 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2984158 ']' 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:46.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
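After waiting for the target and host RPC servers (/var/tmp/spdk.sock and /var/tmp/host.sock), the test registers every generated key file on both sides and then cycles through the digest/dhgroup combinations. Condensed from the rpc.py calls that appear below, the flow for a single key pair looks roughly like this sketch; the paths, NQNs and key names are the ones used in this run, but the sketch only summarizes the trace and is not the test script itself, and it assumes the target-side rpc_cmd talks to the default /var/tmp/spdk.sock socket mentioned above.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side (default /var/tmp/spdk.sock): expose the generated key files to the keyring.
$rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.bav
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DF
# Host side (-s /var/tmp/host.sock): same key files, plus the digests/dhgroups the initiator may offer.
$rpc -s /var/tmp/host.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.bav
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DF
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# Allow the host NQN on the subsystem with bidirectional (controller) authentication ...
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ... then attach from the host with the matching key pair; the qpair should report auth state "completed".
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

Each combination is then torn down with bdev_nvme_detach_controller and nvmf_subsystem_remove_host before the next digest/dhgroup pair is tried, which is why the same pattern repeats throughout the rest of this section.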
00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.428 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8Nx 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8Nx 00:18:47.362 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8Nx 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0VZ ]] 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0VZ 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0VZ 00:18:47.620 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0VZ 00:18:47.878 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:47.878 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bav 00:18:47.878 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.878 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.878 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.878 19:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.bav 00:18:47.878 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.bav 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.3DF ]] 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DF 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DF 00:18:48.136 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DF 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nb5 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nb5 00:18:48.393 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nb5 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.o1k ]] 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o1k 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o1k 00:18:48.650 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o1k 00:18:48.908 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:48.908 19:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oWG 00:18:48.908 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.908 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.908 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.908 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oWG 00:18:48.908 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oWG 00:18:49.200 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:49.200 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:49.200 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.200 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.200 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.200 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.483 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.483 
19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.740 00:18:49.998 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.998 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.998 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.256 { 00:18:50.256 "cntlid": 1, 00:18:50.256 "qid": 0, 00:18:50.256 "state": "enabled", 00:18:50.256 "thread": "nvmf_tgt_poll_group_000", 00:18:50.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:50.256 "listen_address": { 00:18:50.256 "trtype": "TCP", 00:18:50.256 "adrfam": "IPv4", 00:18:50.256 "traddr": "10.0.0.2", 00:18:50.256 "trsvcid": "4420" 00:18:50.256 }, 00:18:50.256 "peer_address": { 00:18:50.256 "trtype": "TCP", 00:18:50.256 "adrfam": "IPv4", 00:18:50.256 "traddr": "10.0.0.1", 00:18:50.256 "trsvcid": "50594" 00:18:50.256 }, 00:18:50.256 "auth": { 00:18:50.256 "state": "completed", 00:18:50.256 "digest": "sha256", 00:18:50.256 "dhgroup": "null" 00:18:50.256 } 00:18:50.256 } 00:18:50.256 ]' 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.256 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.514 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:18:50.514 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.447 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.705 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.962 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.962 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.962 19:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.962 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.220 00:18:52.220 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.220 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.220 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.478 { 00:18:52.478 "cntlid": 3, 00:18:52.478 "qid": 0, 00:18:52.478 "state": "enabled", 00:18:52.478 "thread": "nvmf_tgt_poll_group_000", 00:18:52.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:52.478 "listen_address": { 00:18:52.478 "trtype": "TCP", 00:18:52.478 "adrfam": "IPv4", 00:18:52.478 "traddr": "10.0.0.2", 00:18:52.478 "trsvcid": "4420" 00:18:52.478 }, 00:18:52.478 "peer_address": { 00:18:52.478 "trtype": "TCP", 00:18:52.478 "adrfam": "IPv4", 00:18:52.478 "traddr": "10.0.0.1", 00:18:52.478 "trsvcid": "50614" 00:18:52.478 }, 00:18:52.478 "auth": { 00:18:52.478 "state": "completed", 00:18:52.478 "digest": "sha256", 00:18:52.478 "dhgroup": "null" 00:18:52.478 } 00:18:52.478 } 00:18:52.478 ]' 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.478 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.736 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:18:52.736 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.110 19:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.110 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.367 00:18:54.367 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.367 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.367 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.625 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.625 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.625 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.625 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.882 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.882 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.882 { 00:18:54.882 "cntlid": 5, 00:18:54.882 "qid": 0, 00:18:54.882 "state": "enabled", 00:18:54.882 "thread": "nvmf_tgt_poll_group_000", 00:18:54.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:54.882 "listen_address": { 00:18:54.882 "trtype": "TCP", 00:18:54.882 "adrfam": "IPv4", 00:18:54.882 "traddr": "10.0.0.2", 00:18:54.882 "trsvcid": "4420" 00:18:54.882 }, 00:18:54.882 "peer_address": { 00:18:54.882 "trtype": "TCP", 00:18:54.882 "adrfam": "IPv4", 00:18:54.882 "traddr": "10.0.0.1", 00:18:54.882 "trsvcid": "50622" 00:18:54.882 }, 00:18:54.882 "auth": { 00:18:54.882 "state": "completed", 00:18:54.882 "digest": "sha256", 00:18:54.882 "dhgroup": "null" 00:18:54.882 } 00:18:54.882 } 00:18:54.882 ]' 00:18:54.882 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.883 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.883 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.883 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:54.883 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.883 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.883 19:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.883 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.141 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:18:55.141 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.074 19:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.332 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.589 00:18:56.845 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.845 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.845 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.102 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.102 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.102 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.102 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.102 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.102 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.102 { 00:18:57.102 "cntlid": 7, 00:18:57.102 "qid": 0, 00:18:57.102 "state": "enabled", 00:18:57.102 "thread": "nvmf_tgt_poll_group_000", 00:18:57.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:57.102 "listen_address": { 00:18:57.102 "trtype": "TCP", 00:18:57.102 "adrfam": "IPv4", 00:18:57.102 "traddr": "10.0.0.2", 00:18:57.102 "trsvcid": "4420" 00:18:57.102 }, 00:18:57.102 "peer_address": { 00:18:57.102 "trtype": "TCP", 00:18:57.102 "adrfam": "IPv4", 00:18:57.102 "traddr": "10.0.0.1", 00:18:57.102 "trsvcid": "50648" 00:18:57.102 }, 00:18:57.103 "auth": { 00:18:57.103 "state": "completed", 00:18:57.103 "digest": "sha256", 00:18:57.103 "dhgroup": "null" 00:18:57.103 } 00:18:57.103 } 00:18:57.103 ]' 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.103 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.361 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:18:57.361 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.294 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.551 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.809 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.809 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.809 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.809 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.066 00:18:59.066 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.066 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.066 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.324 { 00:18:59.324 "cntlid": 9, 00:18:59.324 "qid": 0, 00:18:59.324 "state": "enabled", 00:18:59.324 "thread": "nvmf_tgt_poll_group_000", 00:18:59.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:59.324 "listen_address": { 00:18:59.324 "trtype": "TCP", 00:18:59.324 "adrfam": "IPv4", 00:18:59.324 "traddr": "10.0.0.2", 00:18:59.324 "trsvcid": "4420" 00:18:59.324 }, 00:18:59.324 "peer_address": { 00:18:59.324 "trtype": "TCP", 00:18:59.324 "adrfam": "IPv4", 00:18:59.324 "traddr": "10.0.0.1", 00:18:59.324 "trsvcid": "32872" 00:18:59.324 }, 00:18:59.324 "auth": { 00:18:59.324 "state": "completed", 00:18:59.324 "digest": "sha256", 00:18:59.324 "dhgroup": "ffdhe2048" 00:18:59.324 } 00:18:59.324 } 00:18:59.324 ]' 00:18:59.324 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.324 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.582 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:18:59.582 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:00.954 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.955 19:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.955 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.520 00:19:01.520 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.520 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.520 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.777 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.777 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.777 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.777 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.777 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.777 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.777 { 00:19:01.777 "cntlid": 11, 00:19:01.777 "qid": 0, 00:19:01.777 "state": "enabled", 00:19:01.777 "thread": "nvmf_tgt_poll_group_000", 00:19:01.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:01.777 "listen_address": { 00:19:01.778 "trtype": "TCP", 00:19:01.778 "adrfam": "IPv4", 00:19:01.778 "traddr": "10.0.0.2", 00:19:01.778 "trsvcid": "4420" 00:19:01.778 }, 00:19:01.778 "peer_address": { 00:19:01.778 "trtype": "TCP", 00:19:01.778 "adrfam": "IPv4", 00:19:01.778 "traddr": "10.0.0.1", 00:19:01.778 "trsvcid": "32906" 00:19:01.778 }, 00:19:01.778 "auth": { 00:19:01.778 "state": "completed", 00:19:01.778 "digest": "sha256", 00:19:01.778 "dhgroup": "ffdhe2048" 00:19:01.778 } 00:19:01.778 } 00:19:01.778 ]' 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.778 19:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.778 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.035 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:02.035 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:02.968 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:03.226 19:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.226 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.792 00:19:03.792 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.792 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.792 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.051 { 00:19:04.051 "cntlid": 13, 00:19:04.051 "qid": 0, 00:19:04.051 "state": "enabled", 00:19:04.051 "thread": "nvmf_tgt_poll_group_000", 00:19:04.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.051 "listen_address": { 00:19:04.051 "trtype": "TCP", 00:19:04.051 "adrfam": "IPv4", 00:19:04.051 "traddr": "10.0.0.2", 00:19:04.051 "trsvcid": "4420" 00:19:04.051 }, 00:19:04.051 "peer_address": { 00:19:04.051 "trtype": "TCP", 00:19:04.051 "adrfam": "IPv4", 00:19:04.051 "traddr": "10.0.0.1", 00:19:04.051 "trsvcid": "32936" 00:19:04.051 }, 00:19:04.051 "auth": { 00:19:04.051 "state": "completed", 00:19:04.051 "digest": 
"sha256", 00:19:04.051 "dhgroup": "ffdhe2048" 00:19:04.051 } 00:19:04.051 } 00:19:04.051 ]' 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.051 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.309 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:04.309 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.242 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.808 19:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.808 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.065 00:19:06.065 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.065 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.065 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.324 { 00:19:06.324 "cntlid": 15, 00:19:06.324 "qid": 0, 00:19:06.324 "state": "enabled", 00:19:06.324 "thread": "nvmf_tgt_poll_group_000", 00:19:06.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.324 "listen_address": { 00:19:06.324 "trtype": "TCP", 00:19:06.324 "adrfam": "IPv4", 00:19:06.324 "traddr": "10.0.0.2", 00:19:06.324 "trsvcid": "4420" 00:19:06.324 }, 00:19:06.324 "peer_address": { 00:19:06.324 "trtype": "TCP", 00:19:06.324 "adrfam": "IPv4", 00:19:06.324 "traddr": "10.0.0.1", 00:19:06.324 
"trsvcid": "32972" 00:19:06.324 }, 00:19:06.324 "auth": { 00:19:06.324 "state": "completed", 00:19:06.324 "digest": "sha256", 00:19:06.324 "dhgroup": "ffdhe2048" 00:19:06.324 } 00:19:06.324 } 00:19:06.324 ]' 00:19:06.324 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.324 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.582 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:06.582 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.516 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:07.774 19:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.774 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.347 00:19:08.347 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.347 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.347 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.605 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.605 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.605 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.605 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.605 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.605 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.605 { 00:19:08.605 "cntlid": 17, 00:19:08.605 "qid": 0, 00:19:08.605 "state": "enabled", 00:19:08.605 "thread": "nvmf_tgt_poll_group_000", 00:19:08.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.605 "listen_address": { 00:19:08.605 "trtype": "TCP", 00:19:08.605 "adrfam": "IPv4", 
00:19:08.605 "traddr": "10.0.0.2", 00:19:08.605 "trsvcid": "4420" 00:19:08.605 }, 00:19:08.605 "peer_address": { 00:19:08.605 "trtype": "TCP", 00:19:08.605 "adrfam": "IPv4", 00:19:08.605 "traddr": "10.0.0.1", 00:19:08.605 "trsvcid": "33000" 00:19:08.605 }, 00:19:08.605 "auth": { 00:19:08.605 "state": "completed", 00:19:08.605 "digest": "sha256", 00:19:08.605 "dhgroup": "ffdhe3072" 00:19:08.605 } 00:19:08.605 } 00:19:08.605 ]' 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.606 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.864 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:08.864 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:09.796 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.054 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.312 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.570 00:19:10.570 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.570 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.570 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.828 { 
00:19:10.828 "cntlid": 19, 00:19:10.828 "qid": 0, 00:19:10.828 "state": "enabled", 00:19:10.828 "thread": "nvmf_tgt_poll_group_000", 00:19:10.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:10.828 "listen_address": { 00:19:10.828 "trtype": "TCP", 00:19:10.828 "adrfam": "IPv4", 00:19:10.828 "traddr": "10.0.0.2", 00:19:10.828 "trsvcid": "4420" 00:19:10.828 }, 00:19:10.828 "peer_address": { 00:19:10.828 "trtype": "TCP", 00:19:10.828 "adrfam": "IPv4", 00:19:10.828 "traddr": "10.0.0.1", 00:19:10.828 "trsvcid": "42176" 00:19:10.828 }, 00:19:10.828 "auth": { 00:19:10.828 "state": "completed", 00:19:10.828 "digest": "sha256", 00:19:10.828 "dhgroup": "ffdhe3072" 00:19:10.828 } 00:19:10.828 } 00:19:10.828 ]' 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.828 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.085 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.085 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.085 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.085 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.085 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.343 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:11.343 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.330 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.611 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:12.611 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.612 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.913 00:19:12.913 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.913 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.913 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.172 19:49:02 
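Two RPC endpoints are interleaved throughout this trace: rpc_cmd (from autotest_common.sh) drives the nvmf target, while the hostrpc helper expanded on the auth.sh@31 lines points the same rpc.py at the separate host application socket. A minimal sketch of that helper, under the assumption that it is a one-line wrapper and with the Jenkins workspace path abbreviated to $rootdir:

    hostrpc() {
        # $rootdir stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk in this run
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    # typical uses seen in this trace
    hostrpc bdev_nvme_get_controllers
    hostrpc bdev_nvme_detach_controller nvme0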
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.172 { 00:19:13.172 "cntlid": 21, 00:19:13.172 "qid": 0, 00:19:13.172 "state": "enabled", 00:19:13.172 "thread": "nvmf_tgt_poll_group_000", 00:19:13.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.172 "listen_address": { 00:19:13.172 "trtype": "TCP", 00:19:13.172 "adrfam": "IPv4", 00:19:13.172 "traddr": "10.0.0.2", 00:19:13.172 "trsvcid": "4420" 00:19:13.172 }, 00:19:13.172 "peer_address": { 00:19:13.172 "trtype": "TCP", 00:19:13.172 "adrfam": "IPv4", 00:19:13.172 "traddr": "10.0.0.1", 00:19:13.172 "trsvcid": "42206" 00:19:13.172 }, 00:19:13.172 "auth": { 00:19:13.172 "state": "completed", 00:19:13.172 "digest": "sha256", 00:19:13.172 "dhgroup": "ffdhe3072" 00:19:13.172 } 00:19:13.172 } 00:19:13.172 ]' 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.172 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.430 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.430 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.430 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.430 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.430 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.688 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:13.688 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.621 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.879 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.443 00:19:15.443 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.443 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.443 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.701 19:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.701 { 00:19:15.701 "cntlid": 23, 00:19:15.701 "qid": 0, 00:19:15.701 "state": "enabled", 00:19:15.701 "thread": "nvmf_tgt_poll_group_000", 00:19:15.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.701 "listen_address": { 00:19:15.701 "trtype": "TCP", 00:19:15.701 "adrfam": "IPv4", 00:19:15.701 "traddr": "10.0.0.2", 00:19:15.701 "trsvcid": "4420" 00:19:15.701 }, 00:19:15.701 "peer_address": { 00:19:15.701 "trtype": "TCP", 00:19:15.701 "adrfam": "IPv4", 00:19:15.701 "traddr": "10.0.0.1", 00:19:15.701 "trsvcid": "42234" 00:19:15.701 }, 00:19:15.701 "auth": { 00:19:15.701 "state": "completed", 00:19:15.701 "digest": "sha256", 00:19:15.701 "dhgroup": "ffdhe3072" 00:19:15.701 } 00:19:15.701 } 00:19:15.701 ]' 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.701 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.959 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:15.959 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.892 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.457 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:17.457 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.457 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.457 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:17.457 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.458 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.716 00:19:17.716 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.716 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.716 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.974 { 00:19:17.974 "cntlid": 25, 00:19:17.974 "qid": 0, 00:19:17.974 "state": "enabled", 00:19:17.974 "thread": "nvmf_tgt_poll_group_000", 00:19:17.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.974 "listen_address": { 00:19:17.974 "trtype": "TCP", 00:19:17.974 "adrfam": "IPv4", 00:19:17.974 "traddr": "10.0.0.2", 00:19:17.974 "trsvcid": "4420" 00:19:17.974 }, 00:19:17.974 "peer_address": { 00:19:17.974 "trtype": "TCP", 00:19:17.974 "adrfam": "IPv4", 00:19:17.974 "traddr": "10.0.0.1", 00:19:17.974 "trsvcid": "42258" 00:19:17.974 }, 00:19:17.974 "auth": { 00:19:17.974 "state": "completed", 00:19:17.974 "digest": "sha256", 00:19:17.974 "dhgroup": "ffdhe4096" 00:19:17.974 } 00:19:17.974 } 00:19:17.974 ]' 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.974 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.238 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.238 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.238 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.502 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:18.502 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.436 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.694 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.261 00:19:20.261 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.261 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.261 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.519 { 00:19:20.519 "cntlid": 27, 00:19:20.519 "qid": 0, 00:19:20.519 "state": "enabled", 00:19:20.519 "thread": "nvmf_tgt_poll_group_000", 00:19:20.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.519 "listen_address": { 00:19:20.519 "trtype": "TCP", 00:19:20.519 "adrfam": "IPv4", 00:19:20.519 "traddr": "10.0.0.2", 00:19:20.519 "trsvcid": "4420" 00:19:20.519 }, 00:19:20.519 "peer_address": { 00:19:20.519 "trtype": "TCP", 00:19:20.519 "adrfam": "IPv4", 00:19:20.519 "traddr": "10.0.0.1", 00:19:20.519 "trsvcid": "55478" 00:19:20.519 }, 00:19:20.519 "auth": { 00:19:20.519 "state": "completed", 00:19:20.519 "digest": "sha256", 00:19:20.519 "dhgroup": "ffdhe4096" 00:19:20.519 } 00:19:20.519 } 00:19:20.519 ]' 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.519 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.777 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:20.777 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:22.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.149 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.715 00:19:22.715 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
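Taken together, this section is one nested sweep: for each DH group (ffdhe2048, ffdhe3072, ffdhe4096 so far) the add/attach/verify/teardown cycle is repeated for every key id, and the ckey array expansion visible at auth.sh@68 omits --dhchap-ctrlr-key for key3, so that id is exercised without bidirectional authentication. The loop shape as reconstructed from the auth.sh@119-123 expansions, with sha256 written literally as it appears in this part of the trace:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # force this digest/dhgroup combination on the host side
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # one full pass: add_host + attach + qpair checks + nvme-cli cross-check + teardown;
            # inside it, the controller key is only passed when one exists for this id:
            #   ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done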
00:19:22.715 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.715 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.973 { 00:19:22.973 "cntlid": 29, 00:19:22.973 "qid": 0, 00:19:22.973 "state": "enabled", 00:19:22.973 "thread": "nvmf_tgt_poll_group_000", 00:19:22.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.973 "listen_address": { 00:19:22.973 "trtype": "TCP", 00:19:22.973 "adrfam": "IPv4", 00:19:22.973 "traddr": "10.0.0.2", 00:19:22.973 "trsvcid": "4420" 00:19:22.973 }, 00:19:22.973 "peer_address": { 00:19:22.973 "trtype": "TCP", 00:19:22.973 "adrfam": "IPv4", 00:19:22.973 "traddr": "10.0.0.1", 00:19:22.973 "trsvcid": "55516" 00:19:22.973 }, 00:19:22.973 "auth": { 00:19:22.973 "state": "completed", 00:19:22.973 "digest": "sha256", 00:19:22.973 "dhgroup": "ffdhe4096" 00:19:22.973 } 00:19:22.973 } 00:19:22.973 ]' 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.973 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.231 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:23.231 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: 
--dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:24.164 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.422 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.423 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.423 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.423 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.423 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.423 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.423 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.681 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.939 00:19:24.939 19:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.939 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.939 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.197 { 00:19:25.197 "cntlid": 31, 00:19:25.197 "qid": 0, 00:19:25.197 "state": "enabled", 00:19:25.197 "thread": "nvmf_tgt_poll_group_000", 00:19:25.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.197 "listen_address": { 00:19:25.197 "trtype": "TCP", 00:19:25.197 "adrfam": "IPv4", 00:19:25.197 "traddr": "10.0.0.2", 00:19:25.197 "trsvcid": "4420" 00:19:25.197 }, 00:19:25.197 "peer_address": { 00:19:25.197 "trtype": "TCP", 00:19:25.197 "adrfam": "IPv4", 00:19:25.197 "traddr": "10.0.0.1", 00:19:25.197 "trsvcid": "55544" 00:19:25.197 }, 00:19:25.197 "auth": { 00:19:25.197 "state": "completed", 00:19:25.197 "digest": "sha256", 00:19:25.197 "dhgroup": "ffdhe4096" 00:19:25.197 } 00:19:25.197 } 00:19:25.197 ]' 00:19:25.197 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.197 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.197 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.455 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.455 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.455 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.455 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.455 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.713 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:25.713 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:26.647 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.905 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.469 00:19:27.469 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.469 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.469 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.727 { 00:19:27.727 "cntlid": 33, 00:19:27.727 "qid": 0, 00:19:27.727 "state": "enabled", 00:19:27.727 "thread": "nvmf_tgt_poll_group_000", 00:19:27.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.727 "listen_address": { 00:19:27.727 "trtype": "TCP", 00:19:27.727 "adrfam": "IPv4", 00:19:27.727 "traddr": "10.0.0.2", 00:19:27.727 "trsvcid": "4420" 00:19:27.727 }, 00:19:27.727 "peer_address": { 00:19:27.727 "trtype": "TCP", 00:19:27.727 "adrfam": "IPv4", 00:19:27.727 "traddr": "10.0.0.1", 00:19:27.727 "trsvcid": "55564" 00:19:27.727 }, 00:19:27.727 "auth": { 00:19:27.727 "state": "completed", 00:19:27.727 "digest": "sha256", 00:19:27.727 "dhgroup": "ffdhe6144" 00:19:27.727 } 00:19:27.727 } 00:19:27.727 ]' 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.727 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.985 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.985 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.985 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.243 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret 
DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:28.243 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.177 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.435 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.001 00:19:30.001 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.001 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.001 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.259 { 00:19:30.259 "cntlid": 35, 00:19:30.259 "qid": 0, 00:19:30.259 "state": "enabled", 00:19:30.259 "thread": "nvmf_tgt_poll_group_000", 00:19:30.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.259 "listen_address": { 00:19:30.259 "trtype": "TCP", 00:19:30.259 "adrfam": "IPv4", 00:19:30.259 "traddr": "10.0.0.2", 00:19:30.259 "trsvcid": "4420" 00:19:30.259 }, 00:19:30.259 "peer_address": { 00:19:30.259 "trtype": "TCP", 00:19:30.259 "adrfam": "IPv4", 00:19:30.259 "traddr": "10.0.0.1", 00:19:30.259 "trsvcid": "56812" 00:19:30.259 }, 00:19:30.259 "auth": { 00:19:30.259 "state": "completed", 00:19:30.259 "digest": "sha256", 00:19:30.259 "dhgroup": "ffdhe6144" 00:19:30.259 } 00:19:30.259 } 00:19:30.259 ]' 00:19:30.259 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.259 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.259 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.259 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.259 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.517 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.517 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.517 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.775 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:30.775 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.708 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.966 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.532 00:19:32.532 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.532 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.532 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.790 { 00:19:32.790 "cntlid": 37, 00:19:32.790 "qid": 0, 00:19:32.790 "state": "enabled", 00:19:32.790 "thread": "nvmf_tgt_poll_group_000", 00:19:32.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.790 "listen_address": { 00:19:32.790 "trtype": "TCP", 00:19:32.790 "adrfam": "IPv4", 00:19:32.790 "traddr": "10.0.0.2", 00:19:32.790 "trsvcid": "4420" 00:19:32.790 }, 00:19:32.790 "peer_address": { 00:19:32.790 "trtype": "TCP", 00:19:32.790 "adrfam": "IPv4", 00:19:32.790 "traddr": "10.0.0.1", 00:19:32.790 "trsvcid": "56842" 00:19:32.790 }, 00:19:32.790 "auth": { 00:19:32.790 "state": "completed", 00:19:32.790 "digest": "sha256", 00:19:32.790 "dhgroup": "ffdhe6144" 00:19:32.790 } 00:19:32.790 } 00:19:32.790 ]' 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:32.790 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.356 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:33.356 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.290 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.548 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.549 19:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.549 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.115 00:19:35.115 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.115 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.115 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.375 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.375 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.375 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.375 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.375 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.375 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.375 { 00:19:35.375 "cntlid": 39, 00:19:35.376 "qid": 0, 00:19:35.376 "state": "enabled", 00:19:35.376 "thread": "nvmf_tgt_poll_group_000", 00:19:35.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.376 "listen_address": { 00:19:35.376 "trtype": "TCP", 00:19:35.376 "adrfam": "IPv4", 00:19:35.376 "traddr": "10.0.0.2", 00:19:35.376 "trsvcid": "4420" 00:19:35.376 }, 00:19:35.376 "peer_address": { 00:19:35.376 "trtype": "TCP", 00:19:35.376 "adrfam": "IPv4", 00:19:35.376 "traddr": "10.0.0.1", 00:19:35.376 "trsvcid": "56868" 00:19:35.376 }, 00:19:35.376 "auth": { 00:19:35.376 "state": "completed", 00:19:35.376 "digest": "sha256", 00:19:35.376 "dhgroup": "ffdhe6144" 00:19:35.376 } 00:19:35.376 } 00:19:35.376 ]' 00:19:35.376 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.376 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.376 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.633 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.633 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.633 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:35.633 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.633 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.891 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:35.891 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:36.823 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
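Besides the RPC-level attach, every key is also driven through the kernel initiator: target/auth.sh@36 calls nvme connect with the plaintext DHHC-1 secrets (throwaway keys generated for this run, not real credentials), and target/auth.sh@82/@83 then disconnect and deregister the host before the next combination. A rough equivalent of that in-band path, with the key0 secrets copied verbatim from this trace, would be:

  # in-band DH-HMAC-CHAP through the kernel host stack, secrets as printed in the log
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: \
      --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=:

  # tear the association down and forget the host so the next key can be exercised
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55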
00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.081 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.015 00:19:38.015 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.015 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.015 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.301 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.301 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.301 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.301 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.301 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.301 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.301 { 00:19:38.301 "cntlid": 41, 00:19:38.301 "qid": 0, 00:19:38.301 "state": "enabled", 00:19:38.301 "thread": "nvmf_tgt_poll_group_000", 00:19:38.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.301 "listen_address": { 00:19:38.301 "trtype": "TCP", 00:19:38.301 "adrfam": "IPv4", 00:19:38.301 "traddr": "10.0.0.2", 00:19:38.301 "trsvcid": "4420" 00:19:38.301 }, 00:19:38.301 "peer_address": { 00:19:38.301 "trtype": "TCP", 00:19:38.301 "adrfam": "IPv4", 00:19:38.301 "traddr": "10.0.0.1", 00:19:38.302 "trsvcid": "56892" 00:19:38.302 }, 00:19:38.302 "auth": { 00:19:38.302 "state": "completed", 00:19:38.302 "digest": "sha256", 00:19:38.302 "dhgroup": "ffdhe8192" 00:19:38.302 } 00:19:38.302 } 00:19:38.302 ]' 00:19:38.302 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.302 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.302 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.302 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.302 19:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.302 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.302 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.302 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.628 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:38.628 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.562 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.128 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.063 00:19:41.063 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.063 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.063 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.321 { 00:19:41.321 "cntlid": 43, 00:19:41.321 "qid": 0, 00:19:41.321 "state": "enabled", 00:19:41.321 "thread": "nvmf_tgt_poll_group_000", 00:19:41.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.321 "listen_address": { 00:19:41.321 "trtype": "TCP", 00:19:41.321 "adrfam": "IPv4", 00:19:41.321 "traddr": "10.0.0.2", 00:19:41.321 "trsvcid": "4420" 00:19:41.321 }, 00:19:41.321 "peer_address": { 00:19:41.321 "trtype": "TCP", 00:19:41.321 "adrfam": "IPv4", 00:19:41.321 "traddr": "10.0.0.1", 00:19:41.321 "trsvcid": "45926" 00:19:41.321 }, 00:19:41.321 "auth": { 00:19:41.321 "state": "completed", 00:19:41.321 "digest": "sha256", 00:19:41.321 "dhgroup": "ffdhe8192" 00:19:41.321 } 00:19:41.321 } 00:19:41.321 ]' 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:41.321 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.321 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.321 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.321 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.321 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.321 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.579 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:41.579 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:42.952 19:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.952 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.886 00:19:43.886 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.886 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.886 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.144 { 00:19:44.144 "cntlid": 45, 00:19:44.144 "qid": 0, 00:19:44.144 "state": "enabled", 00:19:44.144 "thread": "nvmf_tgt_poll_group_000", 00:19:44.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.144 "listen_address": { 00:19:44.144 "trtype": "TCP", 00:19:44.144 "adrfam": "IPv4", 00:19:44.144 "traddr": "10.0.0.2", 00:19:44.144 "trsvcid": "4420" 00:19:44.144 }, 00:19:44.144 "peer_address": { 00:19:44.144 "trtype": "TCP", 00:19:44.144 "adrfam": "IPv4", 00:19:44.144 "traddr": "10.0.0.1", 00:19:44.144 "trsvcid": "45952" 00:19:44.144 }, 00:19:44.144 "auth": { 00:19:44.144 "state": "completed", 00:19:44.144 "digest": "sha256", 00:19:44.144 "dhgroup": "ffdhe8192" 00:19:44.144 } 00:19:44.144 } 00:19:44.144 ]' 00:19:44.144 
19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.144 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.402 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.402 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.402 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.402 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.402 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.660 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:44.660 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.593 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.851 19:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.851 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.784 00:19:46.784 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.784 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.784 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.042 { 00:19:47.042 "cntlid": 47, 00:19:47.042 "qid": 0, 00:19:47.042 "state": "enabled", 00:19:47.042 "thread": "nvmf_tgt_poll_group_000", 00:19:47.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.042 "listen_address": { 00:19:47.042 "trtype": "TCP", 00:19:47.042 "adrfam": "IPv4", 00:19:47.042 "traddr": "10.0.0.2", 00:19:47.042 "trsvcid": "4420" 00:19:47.042 }, 00:19:47.042 "peer_address": { 00:19:47.042 "trtype": "TCP", 00:19:47.042 "adrfam": "IPv4", 00:19:47.042 "traddr": "10.0.0.1", 00:19:47.042 "trsvcid": "45978" 00:19:47.042 }, 00:19:47.042 "auth": { 00:19:47.042 "state": "completed", 00:19:47.042 
"digest": "sha256", 00:19:47.042 "dhgroup": "ffdhe8192" 00:19:47.042 } 00:19:47.042 } 00:19:47.042 ]' 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.042 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.300 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:47.300 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:48.674 19:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.674 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.931 00:19:48.931 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.931 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.931 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.189 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.189 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.189 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.189 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.189 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.189 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.189 { 00:19:49.189 "cntlid": 49, 00:19:49.189 "qid": 0, 00:19:49.189 "state": "enabled", 00:19:49.189 "thread": "nvmf_tgt_poll_group_000", 00:19:49.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.189 "listen_address": { 00:19:49.189 "trtype": "TCP", 00:19:49.189 "adrfam": "IPv4", 
00:19:49.189 "traddr": "10.0.0.2", 00:19:49.189 "trsvcid": "4420" 00:19:49.189 }, 00:19:49.189 "peer_address": { 00:19:49.189 "trtype": "TCP", 00:19:49.189 "adrfam": "IPv4", 00:19:49.189 "traddr": "10.0.0.1", 00:19:49.189 "trsvcid": "35534" 00:19:49.189 }, 00:19:49.189 "auth": { 00:19:49.189 "state": "completed", 00:19:49.189 "digest": "sha384", 00:19:49.189 "dhgroup": "null" 00:19:49.189 } 00:19:49.189 } 00:19:49.189 ]' 00:19:49.189 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.447 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.705 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:49.705 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:50.638 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.638 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.639 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.639 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.639 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.639 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.639 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.639 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.897 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.898 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.464 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.464 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.722 { 00:19:51.722 "cntlid": 51, 00:19:51.722 "qid": 0, 00:19:51.722 "state": "enabled", 
00:19:51.722 "thread": "nvmf_tgt_poll_group_000", 00:19:51.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.722 "listen_address": { 00:19:51.722 "trtype": "TCP", 00:19:51.722 "adrfam": "IPv4", 00:19:51.722 "traddr": "10.0.0.2", 00:19:51.722 "trsvcid": "4420" 00:19:51.722 }, 00:19:51.722 "peer_address": { 00:19:51.722 "trtype": "TCP", 00:19:51.722 "adrfam": "IPv4", 00:19:51.722 "traddr": "10.0.0.1", 00:19:51.722 "trsvcid": "35554" 00:19:51.722 }, 00:19:51.722 "auth": { 00:19:51.722 "state": "completed", 00:19:51.722 "digest": "sha384", 00:19:51.722 "dhgroup": "null" 00:19:51.722 } 00:19:51.722 } 00:19:51.722 ]' 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.722 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.980 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:51.980 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:52.914 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.172 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.738 00:19:53.738 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.738 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.738 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.996 19:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.996 { 00:19:53.996 "cntlid": 53, 00:19:53.996 "qid": 0, 00:19:53.996 "state": "enabled", 00:19:53.996 "thread": "nvmf_tgt_poll_group_000", 00:19:53.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.996 "listen_address": { 00:19:53.996 "trtype": "TCP", 00:19:53.996 "adrfam": "IPv4", 00:19:53.996 "traddr": "10.0.0.2", 00:19:53.996 "trsvcid": "4420" 00:19:53.996 }, 00:19:53.996 "peer_address": { 00:19:53.996 "trtype": "TCP", 00:19:53.996 "adrfam": "IPv4", 00:19:53.996 "traddr": "10.0.0.1", 00:19:53.996 "trsvcid": "35588" 00:19:53.996 }, 00:19:53.996 "auth": { 00:19:53.996 "state": "completed", 00:19:53.996 "digest": "sha384", 00:19:53.996 "dhgroup": "null" 00:19:53.996 } 00:19:53.996 } 00:19:53.996 ]' 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.996 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.254 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:54.254 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.187 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.752 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.010 00:19:56.010 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.010 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.010 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.267 { 00:19:56.267 "cntlid": 55, 00:19:56.267 "qid": 0, 00:19:56.267 "state": "enabled", 00:19:56.267 "thread": "nvmf_tgt_poll_group_000", 00:19:56.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.267 "listen_address": { 00:19:56.267 "trtype": "TCP", 00:19:56.267 "adrfam": "IPv4", 00:19:56.267 "traddr": "10.0.0.2", 00:19:56.267 "trsvcid": "4420" 00:19:56.267 }, 00:19:56.267 "peer_address": { 00:19:56.267 "trtype": "TCP", 00:19:56.267 "adrfam": "IPv4", 00:19:56.267 "traddr": "10.0.0.1", 00:19:56.267 "trsvcid": "35608" 00:19:56.267 }, 00:19:56.267 "auth": { 00:19:56.267 "state": "completed", 00:19:56.267 "digest": "sha384", 00:19:56.267 "dhgroup": "null" 00:19:56.267 } 00:19:56.267 } 00:19:56.267 ]' 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.267 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.525 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:56.525 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.459 19:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.459 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.025 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.283 00:19:58.283 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.283 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.283 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.540 { 00:19:58.540 "cntlid": 57, 00:19:58.540 "qid": 0, 00:19:58.540 "state": "enabled", 00:19:58.540 "thread": "nvmf_tgt_poll_group_000", 00:19:58.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.540 "listen_address": { 00:19:58.540 "trtype": "TCP", 00:19:58.540 "adrfam": "IPv4", 00:19:58.540 "traddr": "10.0.0.2", 00:19:58.540 "trsvcid": "4420" 00:19:58.540 }, 00:19:58.540 "peer_address": { 00:19:58.540 "trtype": "TCP", 00:19:58.540 "adrfam": "IPv4", 00:19:58.540 "traddr": "10.0.0.1", 00:19:58.540 "trsvcid": "35636" 00:19:58.540 }, 00:19:58.540 "auth": { 00:19:58.540 "state": "completed", 00:19:58.540 "digest": "sha384", 00:19:58.540 "dhgroup": "ffdhe2048" 00:19:58.540 } 00:19:58.540 } 00:19:58.540 ]' 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.540 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.541 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.541 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.541 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.541 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.541 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.798 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:19:58.798 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:00.172 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.173 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.430 00:20:00.430 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.430 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.431 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.688 { 00:20:00.688 "cntlid": 59, 00:20:00.688 "qid": 0, 00:20:00.688 "state": "enabled", 00:20:00.688 "thread": "nvmf_tgt_poll_group_000", 00:20:00.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.688 "listen_address": { 00:20:00.688 "trtype": "TCP", 00:20:00.688 "adrfam": "IPv4", 00:20:00.688 "traddr": "10.0.0.2", 00:20:00.688 "trsvcid": "4420" 00:20:00.688 }, 00:20:00.688 "peer_address": { 00:20:00.688 "trtype": "TCP", 00:20:00.688 "adrfam": "IPv4", 00:20:00.688 "traddr": "10.0.0.1", 00:20:00.688 "trsvcid": "53888" 00:20:00.688 }, 00:20:00.688 "auth": { 00:20:00.688 "state": "completed", 00:20:00.688 "digest": "sha384", 00:20:00.688 "dhgroup": "ffdhe2048" 00:20:00.688 } 00:20:00.688 } 00:20:00.688 ]' 00:20:00.688 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.946 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.204 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:01.204 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.137 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.395 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.653 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.653 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.653 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.653 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.911 00:20:02.911 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.911 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:02.911 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.169 { 00:20:03.169 "cntlid": 61, 00:20:03.169 "qid": 0, 00:20:03.169 "state": "enabled", 00:20:03.169 "thread": "nvmf_tgt_poll_group_000", 00:20:03.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.169 "listen_address": { 00:20:03.169 "trtype": "TCP", 00:20:03.169 "adrfam": "IPv4", 00:20:03.169 "traddr": "10.0.0.2", 00:20:03.169 "trsvcid": "4420" 00:20:03.169 }, 00:20:03.169 "peer_address": { 00:20:03.169 "trtype": "TCP", 00:20:03.169 "adrfam": "IPv4", 00:20:03.169 "traddr": "10.0.0.1", 00:20:03.169 "trsvcid": "53908" 00:20:03.169 }, 00:20:03.169 "auth": { 00:20:03.169 "state": "completed", 00:20:03.169 "digest": "sha384", 00:20:03.169 "dhgroup": "ffdhe2048" 00:20:03.169 } 00:20:03.169 } 00:20:03.169 ]' 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.169 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.761 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:03.761 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.717 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.975 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.233 00:20:05.233 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.233 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.233 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.490 { 00:20:05.490 "cntlid": 63, 00:20:05.490 "qid": 0, 00:20:05.490 "state": "enabled", 00:20:05.490 "thread": "nvmf_tgt_poll_group_000", 00:20:05.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.490 "listen_address": { 00:20:05.490 "trtype": "TCP", 00:20:05.490 "adrfam": "IPv4", 00:20:05.490 "traddr": "10.0.0.2", 00:20:05.490 "trsvcid": "4420" 00:20:05.490 }, 00:20:05.490 "peer_address": { 00:20:05.490 "trtype": "TCP", 00:20:05.490 "adrfam": "IPv4", 00:20:05.490 "traddr": "10.0.0.1", 00:20:05.490 "trsvcid": "53942" 00:20:05.490 }, 00:20:05.490 "auth": { 00:20:05.490 "state": "completed", 00:20:05.490 "digest": "sha384", 00:20:05.490 "dhgroup": "ffdhe2048" 00:20:05.490 } 00:20:05.490 } 00:20:05.490 ]' 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.490 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.055 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:06.055 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:06.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.988 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.246 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.504 
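The same verification follows every attach in this trace. Condensed into plain shell (hostrpc is the host-side wrapper around scripts/rpc.py -s /var/tmp/host.sock whose expansion appears above, rpc_cmd is the target-side equivalent; the herestring form below is a paraphrase, not the literal auth.sh code):

    # check the controller came up on the host side, then inspect the authenticated qpair on the target
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'        # expect "nvme0"
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"                      # expect the digest under test, e.g. sha384
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                      # expect the dhgroup under test, e.g. ffdhe3072
    jq -r '.[0].auth.state'   <<< "$qpairs"                      # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0                    # tear down before the next combination

If any of the three [[ ... ]] comparisons on those jq results did not match, the iteration would fail rather than print the "completed" checks seen throughout this log.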
00:20:07.504 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.504 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.504 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.762 { 00:20:07.762 "cntlid": 65, 00:20:07.762 "qid": 0, 00:20:07.762 "state": "enabled", 00:20:07.762 "thread": "nvmf_tgt_poll_group_000", 00:20:07.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.762 "listen_address": { 00:20:07.762 "trtype": "TCP", 00:20:07.762 "adrfam": "IPv4", 00:20:07.762 "traddr": "10.0.0.2", 00:20:07.762 "trsvcid": "4420" 00:20:07.762 }, 00:20:07.762 "peer_address": { 00:20:07.762 "trtype": "TCP", 00:20:07.762 "adrfam": "IPv4", 00:20:07.762 "traddr": "10.0.0.1", 00:20:07.762 "trsvcid": "53976" 00:20:07.762 }, 00:20:07.762 "auth": { 00:20:07.762 "state": "completed", 00:20:07.762 "digest": "sha384", 00:20:07.762 "dhgroup": "ffdhe3072" 00:20:07.762 } 00:20:07.762 } 00:20:07.762 ]' 00:20:07.762 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.020 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.278 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:08.278 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:09.210 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.211 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.469 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.035 00:20:10.035 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.035 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.035 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.293 { 00:20:10.293 "cntlid": 67, 00:20:10.293 "qid": 0, 00:20:10.293 "state": "enabled", 00:20:10.293 "thread": "nvmf_tgt_poll_group_000", 00:20:10.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.293 "listen_address": { 00:20:10.293 "trtype": "TCP", 00:20:10.293 "adrfam": "IPv4", 00:20:10.293 "traddr": "10.0.0.2", 00:20:10.293 "trsvcid": "4420" 00:20:10.293 }, 00:20:10.293 "peer_address": { 00:20:10.293 "trtype": "TCP", 00:20:10.293 "adrfam": "IPv4", 00:20:10.293 "traddr": "10.0.0.1", 00:20:10.293 "trsvcid": "52548" 00:20:10.293 }, 00:20:10.293 "auth": { 00:20:10.293 "state": "completed", 00:20:10.293 "digest": "sha384", 00:20:10.293 "dhgroup": "ffdhe3072" 00:20:10.293 } 00:20:10.293 } 00:20:10.293 ]' 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.293 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.550 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret 
DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:10.550 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.485 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.050 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.307 00:20:12.307 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.307 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.307 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.565 { 00:20:12.565 "cntlid": 69, 00:20:12.565 "qid": 0, 00:20:12.565 "state": "enabled", 00:20:12.565 "thread": "nvmf_tgt_poll_group_000", 00:20:12.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.565 "listen_address": { 00:20:12.565 "trtype": "TCP", 00:20:12.565 "adrfam": "IPv4", 00:20:12.565 "traddr": "10.0.0.2", 00:20:12.565 "trsvcid": "4420" 00:20:12.565 }, 00:20:12.565 "peer_address": { 00:20:12.565 "trtype": "TCP", 00:20:12.565 "adrfam": "IPv4", 00:20:12.565 "traddr": "10.0.0.1", 00:20:12.565 "trsvcid": "52588" 00:20:12.565 }, 00:20:12.565 "auth": { 00:20:12.565 "state": "completed", 00:20:12.565 "digest": "sha384", 00:20:12.565 "dhgroup": "ffdhe3072" 00:20:12.565 } 00:20:12.565 } 00:20:12.565 ]' 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.565 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.823 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.823 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.823 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:13.080 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:13.080 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.013 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
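Between the SPDK-host checks, each key is also driven through the kernel initiator, as in the nvme connect/disconnect pair above for key2. Stripped of timestamps, that round trip looks roughly like this (hostid stands in for the host UUID 5b23e107-7094-e311-b1cb-001e67a97d55 used throughout; the DHHC-1 secrets are the per-key test secrets from the trace, elided here; unidirectional keys such as key3 omit --dhchap-ctrl-secret):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:02:..." --dhchap-ctrl-secret "DHHC-1:01:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0        # expects "disconnected 1 controller(s)"
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$hostid"

Removing the host entry at the end is what lets the next digest/dhgroup/key combination re-add the same host NQN with a different key.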
00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.271 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.530 00:20:14.530 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.530 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.530 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.787 { 00:20:14.787 "cntlid": 71, 00:20:14.787 "qid": 0, 00:20:14.787 "state": "enabled", 00:20:14.787 "thread": "nvmf_tgt_poll_group_000", 00:20:14.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.787 "listen_address": { 00:20:14.787 "trtype": "TCP", 00:20:14.787 "adrfam": "IPv4", 00:20:14.787 "traddr": "10.0.0.2", 00:20:14.787 "trsvcid": "4420" 00:20:14.787 }, 00:20:14.787 "peer_address": { 00:20:14.787 "trtype": "TCP", 00:20:14.787 "adrfam": "IPv4", 00:20:14.787 "traddr": "10.0.0.1", 00:20:14.787 "trsvcid": "52630" 00:20:14.787 }, 00:20:14.787 "auth": { 00:20:14.787 "state": "completed", 00:20:14.787 "digest": "sha384", 00:20:14.787 "dhgroup": "ffdhe3072" 00:20:14.787 } 00:20:14.787 } 00:20:14.787 ]' 00:20:14.787 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.045 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.303 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:15.303 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.236 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
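The target-side setup that each digest/dhgroup/key iteration performs before connecting is the same sequence of RPCs seen just above for ffdhe4096/key0. A rough, paraphrased shape of that step (dhgroup and keyid are the loop variables from the trace; hostid again stands in for the host UUID):

    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # ${ckeys[$keyid]:+...} expands to nothing for key3, so no controller key is sent for it
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$hostid" --dhchap-key "key$keyid" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"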
00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.494 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.060 00:20:17.060 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.060 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.060 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.318 { 00:20:17.318 "cntlid": 73, 00:20:17.318 "qid": 0, 00:20:17.318 "state": "enabled", 00:20:17.318 "thread": "nvmf_tgt_poll_group_000", 00:20:17.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.318 "listen_address": { 00:20:17.318 "trtype": "TCP", 00:20:17.318 "adrfam": "IPv4", 00:20:17.318 "traddr": "10.0.0.2", 00:20:17.318 "trsvcid": "4420" 00:20:17.318 }, 00:20:17.318 "peer_address": { 00:20:17.318 "trtype": "TCP", 00:20:17.318 "adrfam": "IPv4", 00:20:17.318 "traddr": "10.0.0.1", 00:20:17.318 "trsvcid": "52652" 00:20:17.318 }, 00:20:17.318 "auth": { 00:20:17.318 "state": "completed", 00:20:17.318 "digest": "sha384", 00:20:17.318 "dhgroup": "ffdhe4096" 00:20:17.318 } 00:20:17.318 } 00:20:17.318 ]' 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.318 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.318 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.318 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.318 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.318 
19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.318 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.576 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:17.576 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.509 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.074 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.332 00:20:19.332 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.332 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.332 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.590 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.590 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.590 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.590 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.590 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.590 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.590 { 00:20:19.590 "cntlid": 75, 00:20:19.590 "qid": 0, 00:20:19.590 "state": "enabled", 00:20:19.590 "thread": "nvmf_tgt_poll_group_000", 00:20:19.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.590 "listen_address": { 00:20:19.590 "trtype": "TCP", 00:20:19.590 "adrfam": "IPv4", 00:20:19.590 "traddr": "10.0.0.2", 00:20:19.590 "trsvcid": "4420" 00:20:19.590 }, 00:20:19.590 "peer_address": { 00:20:19.590 "trtype": "TCP", 00:20:19.590 "adrfam": "IPv4", 00:20:19.590 "traddr": "10.0.0.1", 00:20:19.590 "trsvcid": "42088" 00:20:19.590 }, 00:20:19.590 "auth": { 00:20:19.590 "state": "completed", 00:20:19.590 "digest": "sha384", 00:20:19.590 "dhgroup": "ffdhe4096" 00:20:19.590 } 00:20:19.590 } 00:20:19.590 ]' 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.156 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:20.156 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.090 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.347 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.605 00:20:21.605 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.605 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.605 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.170 { 00:20:22.170 "cntlid": 77, 00:20:22.170 "qid": 0, 00:20:22.170 "state": "enabled", 00:20:22.170 "thread": "nvmf_tgt_poll_group_000", 00:20:22.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.170 "listen_address": { 00:20:22.170 "trtype": "TCP", 00:20:22.170 "adrfam": "IPv4", 00:20:22.170 "traddr": "10.0.0.2", 00:20:22.170 "trsvcid": "4420" 00:20:22.170 }, 00:20:22.170 "peer_address": { 00:20:22.170 "trtype": "TCP", 00:20:22.170 "adrfam": "IPv4", 00:20:22.170 "traddr": "10.0.0.1", 00:20:22.170 "trsvcid": "42108" 00:20:22.170 }, 00:20:22.170 "auth": { 00:20:22.170 "state": "completed", 00:20:22.170 "digest": "sha384", 00:20:22.170 "dhgroup": "ffdhe4096" 00:20:22.170 } 00:20:22.170 } 00:20:22.170 ]' 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.170 19:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.170 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.428 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:22.428 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.362 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.620 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.186 00:20:24.186 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.186 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.186 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.444 { 00:20:24.444 "cntlid": 79, 00:20:24.444 "qid": 0, 00:20:24.444 "state": "enabled", 00:20:24.444 "thread": "nvmf_tgt_poll_group_000", 00:20:24.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.444 "listen_address": { 00:20:24.444 "trtype": "TCP", 00:20:24.444 "adrfam": "IPv4", 00:20:24.444 "traddr": "10.0.0.2", 00:20:24.444 "trsvcid": "4420" 00:20:24.444 }, 00:20:24.444 "peer_address": { 00:20:24.444 "trtype": "TCP", 00:20:24.444 "adrfam": "IPv4", 00:20:24.444 "traddr": "10.0.0.1", 00:20:24.444 "trsvcid": "42118" 00:20:24.444 }, 00:20:24.444 "auth": { 00:20:24.444 "state": "completed", 00:20:24.444 "digest": "sha384", 00:20:24.444 "dhgroup": "ffdhe4096" 00:20:24.444 } 00:20:24.444 } 00:20:24.444 ]' 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.444 19:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.444 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.702 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:24.702 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.636 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.202 19:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.202 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.768 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.768 { 00:20:26.768 "cntlid": 81, 00:20:26.768 "qid": 0, 00:20:26.768 "state": "enabled", 00:20:26.768 "thread": "nvmf_tgt_poll_group_000", 00:20:26.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.768 "listen_address": { 00:20:26.768 "trtype": "TCP", 00:20:26.768 "adrfam": "IPv4", 00:20:26.768 "traddr": "10.0.0.2", 00:20:26.768 "trsvcid": "4420" 00:20:26.768 }, 00:20:26.768 "peer_address": { 00:20:26.768 "trtype": "TCP", 00:20:26.768 "adrfam": "IPv4", 00:20:26.768 "traddr": "10.0.0.1", 00:20:26.768 "trsvcid": "42158" 00:20:26.768 }, 00:20:26.768 "auth": { 00:20:26.768 "state": "completed", 00:20:26.768 "digest": 
"sha384", 00:20:26.768 "dhgroup": "ffdhe6144" 00:20:26.768 } 00:20:26.768 } 00:20:26.768 ]' 00:20:26.768 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.026 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.284 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:27.284 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.217 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.474 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.092 00:20:29.092 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.092 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.092 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.375 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.376 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.376 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.376 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.376 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.376 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.376 { 00:20:29.376 "cntlid": 83, 00:20:29.376 "qid": 0, 00:20:29.376 "state": "enabled", 00:20:29.376 "thread": "nvmf_tgt_poll_group_000", 00:20:29.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.376 "listen_address": { 00:20:29.376 "trtype": "TCP", 00:20:29.376 "adrfam": "IPv4", 00:20:29.376 "traddr": "10.0.0.2", 00:20:29.376 
"trsvcid": "4420" 00:20:29.376 }, 00:20:29.376 "peer_address": { 00:20:29.376 "trtype": "TCP", 00:20:29.376 "adrfam": "IPv4", 00:20:29.376 "traddr": "10.0.0.1", 00:20:29.376 "trsvcid": "55120" 00:20:29.376 }, 00:20:29.376 "auth": { 00:20:29.376 "state": "completed", 00:20:29.376 "digest": "sha384", 00:20:29.376 "dhgroup": "ffdhe6144" 00:20:29.376 } 00:20:29.376 } 00:20:29.376 ]' 00:20:29.376 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.634 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.892 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:29.892 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:30.826 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.826 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.826 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.826 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.084 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.084 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.084 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.084 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.342 
19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.342 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.909 00:20:31.909 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.909 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.909 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.166 { 00:20:32.166 "cntlid": 85, 00:20:32.166 "qid": 0, 00:20:32.166 "state": "enabled", 00:20:32.166 "thread": "nvmf_tgt_poll_group_000", 00:20:32.166 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.166 "listen_address": { 00:20:32.166 "trtype": "TCP", 00:20:32.166 "adrfam": "IPv4", 00:20:32.166 "traddr": "10.0.0.2", 00:20:32.166 "trsvcid": "4420" 00:20:32.166 }, 00:20:32.166 "peer_address": { 00:20:32.166 "trtype": "TCP", 00:20:32.166 "adrfam": "IPv4", 00:20:32.166 "traddr": "10.0.0.1", 00:20:32.166 "trsvcid": "55142" 00:20:32.166 }, 00:20:32.166 "auth": { 00:20:32.166 "state": "completed", 00:20:32.166 "digest": "sha384", 00:20:32.166 "dhgroup": "ffdhe6144" 00:20:32.166 } 00:20:32.166 } 00:20:32.166 ]' 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.166 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.424 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:32.424 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.797 19:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.797 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.730 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.730 { 00:20:34.730 "cntlid": 87, 
00:20:34.730 "qid": 0, 00:20:34.730 "state": "enabled", 00:20:34.730 "thread": "nvmf_tgt_poll_group_000", 00:20:34.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.730 "listen_address": { 00:20:34.730 "trtype": "TCP", 00:20:34.730 "adrfam": "IPv4", 00:20:34.730 "traddr": "10.0.0.2", 00:20:34.730 "trsvcid": "4420" 00:20:34.730 }, 00:20:34.730 "peer_address": { 00:20:34.730 "trtype": "TCP", 00:20:34.730 "adrfam": "IPv4", 00:20:34.730 "traddr": "10.0.0.1", 00:20:34.730 "trsvcid": "55160" 00:20:34.730 }, 00:20:34.730 "auth": { 00:20:34.730 "state": "completed", 00:20:34.730 "digest": "sha384", 00:20:34.730 "dhgroup": "ffdhe6144" 00:20:34.730 } 00:20:34.730 } 00:20:34.730 ]' 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.730 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.988 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.988 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.988 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.988 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.988 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.245 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:35.245 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.179 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.437 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.695 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.695 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.695 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.695 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.695 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.628 00:20:37.628 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.628 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.628 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.886 { 00:20:37.886 "cntlid": 89, 00:20:37.886 "qid": 0, 00:20:37.886 "state": "enabled", 00:20:37.886 "thread": "nvmf_tgt_poll_group_000", 00:20:37.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.886 "listen_address": { 00:20:37.886 "trtype": "TCP", 00:20:37.886 "adrfam": "IPv4", 00:20:37.886 "traddr": "10.0.0.2", 00:20:37.886 "trsvcid": "4420" 00:20:37.886 }, 00:20:37.886 "peer_address": { 00:20:37.886 "trtype": "TCP", 00:20:37.886 "adrfam": "IPv4", 00:20:37.886 "traddr": "10.0.0.1", 00:20:37.886 "trsvcid": "55172" 00:20:37.886 }, 00:20:37.886 "auth": { 00:20:37.886 "state": "completed", 00:20:37.886 "digest": "sha384", 00:20:37.886 "dhgroup": "ffdhe8192" 00:20:37.886 } 00:20:37.886 } 00:20:37.886 ]' 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.886 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.143 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:38.144 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 19:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.516 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.516 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.455 00:20:40.455 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.455 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.455 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.713 { 00:20:40.713 "cntlid": 91, 00:20:40.713 "qid": 0, 00:20:40.713 "state": "enabled", 00:20:40.713 "thread": "nvmf_tgt_poll_group_000", 00:20:40.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.713 "listen_address": { 00:20:40.713 "trtype": "TCP", 00:20:40.713 "adrfam": "IPv4", 00:20:40.713 "traddr": "10.0.0.2", 00:20:40.713 "trsvcid": "4420" 00:20:40.713 }, 00:20:40.713 "peer_address": { 00:20:40.713 "trtype": "TCP", 00:20:40.713 "adrfam": "IPv4", 00:20:40.713 "traddr": "10.0.0.1", 00:20:40.713 "trsvcid": "50248" 00:20:40.713 }, 00:20:40.713 "auth": { 00:20:40.713 "state": "completed", 00:20:40.713 "digest": "sha384", 00:20:40.713 "dhgroup": "ffdhe8192" 00:20:40.713 } 00:20:40.713 } 00:20:40.713 ]' 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.713 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.971 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.971 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.971 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.229 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:41.229 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.208 19:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.208 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.466 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.467 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.467 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.467 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.467 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.401 00:20:43.401 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.401 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.401 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.658 19:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.658 { 00:20:43.658 "cntlid": 93, 00:20:43.658 "qid": 0, 00:20:43.658 "state": "enabled", 00:20:43.658 "thread": "nvmf_tgt_poll_group_000", 00:20:43.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.658 "listen_address": { 00:20:43.658 "trtype": "TCP", 00:20:43.658 "adrfam": "IPv4", 00:20:43.658 "traddr": "10.0.0.2", 00:20:43.658 "trsvcid": "4420" 00:20:43.658 }, 00:20:43.658 "peer_address": { 00:20:43.658 "trtype": "TCP", 00:20:43.658 "adrfam": "IPv4", 00:20:43.658 "traddr": "10.0.0.1", 00:20:43.658 "trsvcid": "50262" 00:20:43.658 }, 00:20:43.658 "auth": { 00:20:43.658 "state": "completed", 00:20:43.658 "digest": "sha384", 00:20:43.658 "dhgroup": "ffdhe8192" 00:20:43.658 } 00:20:43.658 } 00:20:43.658 ]' 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.658 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.916 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.916 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.916 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.174 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:44.174 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.107 19:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.107 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.365 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.299 00:20:46.299 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.299 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.299 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.557 { 00:20:46.557 "cntlid": 95, 00:20:46.557 "qid": 0, 00:20:46.557 "state": "enabled", 00:20:46.557 "thread": "nvmf_tgt_poll_group_000", 00:20:46.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.557 "listen_address": { 00:20:46.557 "trtype": "TCP", 00:20:46.557 "adrfam": "IPv4", 00:20:46.557 "traddr": "10.0.0.2", 00:20:46.557 "trsvcid": "4420" 00:20:46.557 }, 00:20:46.557 "peer_address": { 00:20:46.557 "trtype": "TCP", 00:20:46.557 "adrfam": "IPv4", 00:20:46.557 "traddr": "10.0.0.1", 00:20:46.557 "trsvcid": "50290" 00:20:46.557 }, 00:20:46.557 "auth": { 00:20:46.557 "state": "completed", 00:20:46.557 "digest": "sha384", 00:20:46.557 "dhgroup": "ffdhe8192" 00:20:46.557 } 00:20:46.557 } 00:20:46.557 ]' 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.557 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.815 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:46.815 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.749 19:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.749 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.007 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.578 00:20:48.578 
19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.578 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.578 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.837 { 00:20:48.837 "cntlid": 97, 00:20:48.837 "qid": 0, 00:20:48.837 "state": "enabled", 00:20:48.837 "thread": "nvmf_tgt_poll_group_000", 00:20:48.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.837 "listen_address": { 00:20:48.837 "trtype": "TCP", 00:20:48.837 "adrfam": "IPv4", 00:20:48.837 "traddr": "10.0.0.2", 00:20:48.837 "trsvcid": "4420" 00:20:48.837 }, 00:20:48.837 "peer_address": { 00:20:48.837 "trtype": "TCP", 00:20:48.837 "adrfam": "IPv4", 00:20:48.837 "traddr": "10.0.0.1", 00:20:48.837 "trsvcid": "44964" 00:20:48.837 }, 00:20:48.837 "auth": { 00:20:48.837 "state": "completed", 00:20:48.837 "digest": "sha512", 00:20:48.837 "dhgroup": "null" 00:20:48.837 } 00:20:48.837 } 00:20:48.837 ]' 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.837 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.095 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:49.095 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.028 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.287 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.852 00:20:50.852 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.852 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.852 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.110 { 00:20:51.110 "cntlid": 99, 00:20:51.110 "qid": 0, 00:20:51.110 "state": "enabled", 00:20:51.110 "thread": "nvmf_tgt_poll_group_000", 00:20:51.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.110 "listen_address": { 00:20:51.110 "trtype": "TCP", 00:20:51.110 "adrfam": "IPv4", 00:20:51.110 "traddr": "10.0.0.2", 00:20:51.110 "trsvcid": "4420" 00:20:51.110 }, 00:20:51.110 "peer_address": { 00:20:51.110 "trtype": "TCP", 00:20:51.110 "adrfam": "IPv4", 00:20:51.110 "traddr": "10.0.0.1", 00:20:51.110 "trsvcid": "44996" 00:20:51.110 }, 00:20:51.110 "auth": { 00:20:51.110 "state": "completed", 00:20:51.110 "digest": "sha512", 00:20:51.110 "dhgroup": "null" 00:20:51.110 } 00:20:51.110 } 00:20:51.110 ]' 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.110 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.368 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:51.368 19:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.302 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:52.868 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.126 00:20:53.126 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.126 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.126 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.384 { 00:20:53.384 "cntlid": 101, 00:20:53.384 "qid": 0, 00:20:53.384 "state": "enabled", 00:20:53.384 "thread": "nvmf_tgt_poll_group_000", 00:20:53.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.384 "listen_address": { 00:20:53.384 "trtype": "TCP", 00:20:53.384 "adrfam": "IPv4", 00:20:53.384 "traddr": "10.0.0.2", 00:20:53.384 "trsvcid": "4420" 00:20:53.384 }, 00:20:53.384 "peer_address": { 00:20:53.384 "trtype": "TCP", 00:20:53.384 "adrfam": "IPv4", 00:20:53.384 "traddr": "10.0.0.1", 00:20:53.384 "trsvcid": "45020" 00:20:53.384 }, 00:20:53.384 "auth": { 00:20:53.384 "state": "completed", 00:20:53.384 "digest": "sha512", 00:20:53.384 "dhgroup": "null" 00:20:53.384 } 00:20:53.384 } 00:20:53.384 ]' 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.384 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.950 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:53.950 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.912 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.195 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.453 00:20:55.453 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.453 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.453 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.711 { 00:20:55.711 "cntlid": 103, 00:20:55.711 "qid": 0, 00:20:55.711 "state": "enabled", 00:20:55.711 "thread": "nvmf_tgt_poll_group_000", 00:20:55.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.711 "listen_address": { 00:20:55.711 "trtype": "TCP", 00:20:55.711 "adrfam": "IPv4", 00:20:55.711 "traddr": "10.0.0.2", 00:20:55.711 "trsvcid": "4420" 00:20:55.711 }, 00:20:55.711 "peer_address": { 00:20:55.711 "trtype": "TCP", 00:20:55.711 "adrfam": "IPv4", 00:20:55.711 "traddr": "10.0.0.1", 00:20:55.711 "trsvcid": "45050" 00:20:55.711 }, 00:20:55.711 "auth": { 00:20:55.711 "state": "completed", 00:20:55.711 "digest": "sha512", 00:20:55.711 "dhgroup": "null" 00:20:55.711 } 00:20:55.711 } 00:20:55.711 ]' 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.711 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.712 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.712 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.970 19:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:55.970 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:20:56.903 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.161 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.419 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.676 00:20:57.677 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.677 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.677 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.934 { 00:20:57.934 "cntlid": 105, 00:20:57.934 "qid": 0, 00:20:57.934 "state": "enabled", 00:20:57.934 "thread": "nvmf_tgt_poll_group_000", 00:20:57.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.934 "listen_address": { 00:20:57.934 "trtype": "TCP", 00:20:57.934 "adrfam": "IPv4", 00:20:57.934 "traddr": "10.0.0.2", 00:20:57.934 "trsvcid": "4420" 00:20:57.934 }, 00:20:57.934 "peer_address": { 00:20:57.934 "trtype": "TCP", 00:20:57.934 "adrfam": "IPv4", 00:20:57.934 "traddr": "10.0.0.1", 00:20:57.934 "trsvcid": "45076" 00:20:57.934 }, 00:20:57.934 "auth": { 00:20:57.934 "state": "completed", 00:20:57.934 "digest": "sha512", 00:20:57.934 "dhgroup": "ffdhe2048" 00:20:57.934 } 00:20:57.934 } 00:20:57.934 ]' 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.934 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.192 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.192 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.192 19:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.450 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:58.450 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:20:59.383 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.383 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.641 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.899 00:20:59.899 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.899 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.899 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.157 { 00:21:00.157 "cntlid": 107, 00:21:00.157 "qid": 0, 00:21:00.157 "state": "enabled", 00:21:00.157 "thread": "nvmf_tgt_poll_group_000", 00:21:00.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.157 "listen_address": { 00:21:00.157 "trtype": "TCP", 00:21:00.157 "adrfam": "IPv4", 00:21:00.157 "traddr": "10.0.0.2", 00:21:00.157 "trsvcid": "4420" 00:21:00.157 }, 00:21:00.157 "peer_address": { 00:21:00.157 "trtype": "TCP", 00:21:00.157 "adrfam": "IPv4", 00:21:00.157 "traddr": "10.0.0.1", 00:21:00.157 "trsvcid": "41076" 00:21:00.157 }, 00:21:00.157 "auth": { 00:21:00.157 "state": "completed", 00:21:00.157 "digest": "sha512", 00:21:00.157 "dhgroup": "ffdhe2048" 00:21:00.157 } 00:21:00.157 } 00:21:00.157 ]' 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.157 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.416 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.416 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:00.416 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.416 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.416 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.674 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:00.674 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:01.607 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.607 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.607 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.607 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.607 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.608 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.608 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.608 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.865 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:01.865 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.866 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.123 00:21:02.382 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.382 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.382 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.640 { 00:21:02.640 "cntlid": 109, 00:21:02.640 "qid": 0, 00:21:02.640 "state": "enabled", 00:21:02.640 "thread": "nvmf_tgt_poll_group_000", 00:21:02.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.640 "listen_address": { 00:21:02.640 "trtype": "TCP", 00:21:02.640 "adrfam": "IPv4", 00:21:02.640 "traddr": "10.0.0.2", 00:21:02.640 "trsvcid": "4420" 00:21:02.640 }, 00:21:02.640 "peer_address": { 00:21:02.640 "trtype": "TCP", 00:21:02.640 "adrfam": "IPv4", 00:21:02.640 "traddr": "10.0.0.1", 00:21:02.640 "trsvcid": "41100" 00:21:02.640 }, 00:21:02.640 "auth": { 00:21:02.640 "state": "completed", 00:21:02.640 "digest": "sha512", 00:21:02.640 "dhgroup": "ffdhe2048" 00:21:02.640 } 00:21:02.640 } 00:21:02.640 ]' 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.640 19:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.640 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.898 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:02.898 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.831 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.397 19:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.397 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.655 00:21:04.655 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.655 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.655 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.913 { 00:21:04.913 "cntlid": 111, 00:21:04.913 "qid": 0, 00:21:04.913 "state": "enabled", 00:21:04.913 "thread": "nvmf_tgt_poll_group_000", 00:21:04.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.913 "listen_address": { 00:21:04.913 "trtype": "TCP", 00:21:04.913 "adrfam": "IPv4", 00:21:04.913 "traddr": "10.0.0.2", 00:21:04.913 "trsvcid": "4420" 00:21:04.913 }, 00:21:04.913 "peer_address": { 00:21:04.913 "trtype": "TCP", 00:21:04.913 "adrfam": "IPv4", 00:21:04.913 "traddr": "10.0.0.1", 00:21:04.913 "trsvcid": "41112" 00:21:04.913 }, 00:21:04.913 "auth": { 00:21:04.913 "state": "completed", 00:21:04.913 "digest": "sha512", 00:21:04.913 "dhgroup": "ffdhe2048" 00:21:04.913 } 00:21:04.913 } 00:21:04.913 ]' 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.913 
19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.913 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.171 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:05.171 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.545 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.545 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.803 00:21:06.803 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.803 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.803 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.061 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.061 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.061 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.061 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.319 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.319 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.319 { 00:21:07.319 "cntlid": 113, 00:21:07.319 "qid": 0, 00:21:07.319 "state": "enabled", 00:21:07.319 "thread": "nvmf_tgt_poll_group_000", 00:21:07.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.319 "listen_address": { 00:21:07.319 "trtype": "TCP", 00:21:07.319 "adrfam": "IPv4", 00:21:07.319 "traddr": "10.0.0.2", 00:21:07.319 "trsvcid": "4420" 00:21:07.319 }, 00:21:07.319 "peer_address": { 00:21:07.319 "trtype": "TCP", 00:21:07.319 "adrfam": "IPv4", 00:21:07.319 "traddr": "10.0.0.1", 00:21:07.319 "trsvcid": "41144" 00:21:07.319 }, 00:21:07.319 "auth": { 00:21:07.319 "state": "completed", 00:21:07.319 "digest": "sha512", 00:21:07.319 "dhgroup": "ffdhe3072" 00:21:07.319 } 00:21:07.319 } 00:21:07.319 ]' 00:21:07.319 19:50:56 
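The iteration that just completed above (sha512 / ffdhe3072 / key0) pairs one target-side RPC with one host-side RPC. A minimal sketch of that pair, assuming key0 and ckey0 were registered with both RPC servers earlier in the run (that setup is not part of this excerpt); every flag below appears verbatim in the log, and rpc/hostnqn/subnqn are convenience variables introduced for the sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target: allow this host on the subsystem and bind its DH-HMAC-CHAP key pair
  # (key0 authenticates the host, ckey0 additionally authenticates the controller).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host: attach a bdev controller over TCP, presenting the same key pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0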
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.319 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.319 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.319 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.319 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.319 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.319 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.319 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.578 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:07.578 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.511 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.075 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.333 00:21:09.333 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.333 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.333 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.592 { 00:21:09.592 "cntlid": 115, 00:21:09.592 "qid": 0, 00:21:09.592 "state": "enabled", 00:21:09.592 "thread": "nvmf_tgt_poll_group_000", 00:21:09.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.592 "listen_address": { 00:21:09.592 "trtype": "TCP", 00:21:09.592 "adrfam": "IPv4", 00:21:09.592 "traddr": "10.0.0.2", 00:21:09.592 "trsvcid": "4420" 00:21:09.592 }, 00:21:09.592 "peer_address": { 00:21:09.592 "trtype": "TCP", 00:21:09.592 "adrfam": "IPv4", 
00:21:09.592 "traddr": "10.0.0.1", 00:21:09.592 "trsvcid": "40286" 00:21:09.592 }, 00:21:09.592 "auth": { 00:21:09.592 "state": "completed", 00:21:09.592 "digest": "sha512", 00:21:09.592 "dhgroup": "ffdhe3072" 00:21:09.592 } 00:21:09.592 } 00:21:09.592 ]' 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.592 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.157 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:10.157 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.089 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.347 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.605 00:21:11.605 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.605 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.605 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.862 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.862 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.862 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.862 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.120 { 00:21:12.120 "cntlid": 117, 00:21:12.120 "qid": 0, 00:21:12.120 "state": "enabled", 00:21:12.120 "thread": "nvmf_tgt_poll_group_000", 00:21:12.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.120 "listen_address": { 00:21:12.120 "trtype": "TCP", 
00:21:12.120 "adrfam": "IPv4", 00:21:12.120 "traddr": "10.0.0.2", 00:21:12.120 "trsvcid": "4420" 00:21:12.120 }, 00:21:12.120 "peer_address": { 00:21:12.120 "trtype": "TCP", 00:21:12.120 "adrfam": "IPv4", 00:21:12.120 "traddr": "10.0.0.1", 00:21:12.120 "trsvcid": "40316" 00:21:12.120 }, 00:21:12.120 "auth": { 00:21:12.120 "state": "completed", 00:21:12.120 "digest": "sha512", 00:21:12.120 "dhgroup": "ffdhe3072" 00:21:12.120 } 00:21:12.120 } 00:21:12.120 ]' 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.120 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.378 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:12.378 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.311 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.569 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.134 00:21:14.134 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.134 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.134 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.393 { 00:21:14.393 "cntlid": 119, 00:21:14.393 "qid": 0, 00:21:14.393 "state": "enabled", 00:21:14.393 "thread": "nvmf_tgt_poll_group_000", 00:21:14.393 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.393 "listen_address": { 00:21:14.393 "trtype": "TCP", 00:21:14.393 "adrfam": "IPv4", 00:21:14.393 "traddr": "10.0.0.2", 00:21:14.393 "trsvcid": "4420" 00:21:14.393 }, 00:21:14.393 "peer_address": { 00:21:14.393 "trtype": "TCP", 00:21:14.393 "adrfam": "IPv4", 00:21:14.393 "traddr": "10.0.0.1", 00:21:14.393 "trsvcid": "40338" 00:21:14.393 }, 00:21:14.393 "auth": { 00:21:14.393 "state": "completed", 00:21:14.393 "digest": "sha512", 00:21:14.393 "dhgroup": "ffdhe3072" 00:21:14.393 } 00:21:14.393 } 00:21:14.393 ]' 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.393 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.651 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:14.651 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.583 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.583 19:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.148 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.406 00:21:16.406 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.406 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.406 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.665 19:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.665 { 00:21:16.665 "cntlid": 121, 00:21:16.665 "qid": 0, 00:21:16.665 "state": "enabled", 00:21:16.665 "thread": "nvmf_tgt_poll_group_000", 00:21:16.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.665 "listen_address": { 00:21:16.665 "trtype": "TCP", 00:21:16.665 "adrfam": "IPv4", 00:21:16.665 "traddr": "10.0.0.2", 00:21:16.665 "trsvcid": "4420" 00:21:16.665 }, 00:21:16.665 "peer_address": { 00:21:16.665 "trtype": "TCP", 00:21:16.665 "adrfam": "IPv4", 00:21:16.665 "traddr": "10.0.0.1", 00:21:16.665 "trsvcid": "40352" 00:21:16.665 }, 00:21:16.665 "auth": { 00:21:16.665 "state": "completed", 00:21:16.665 "digest": "sha512", 00:21:16.665 "dhgroup": "ffdhe4096" 00:21:16.665 } 00:21:16.665 } 00:21:16.665 ]' 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.665 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.923 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.923 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.923 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.180 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:17.180 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
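This whole stretch of log is produced by a two-level loop in target/auth.sh; the xtrace markers above spell it out (auth.sh@119 for dhgroup, @120 for keyid, @121 hostrpc bdev_nvme_set_options, @123 connect_authenticate). A condensed sketch of that structure, limited to the digest and DH groups that appear in this excerpt (the script also covers other digests elsewhere in the log); rpc is the same hypothetical shell variable used in the sketches above:

  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # "${dhgroups[@]}" in the script
      for keyid in 0 1 2 3; do                                 # "${!keys[@]}" in the script
          # Point the host initiator at this digest / DH group combination ...
          "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          # ... then run one add_host / attach / verify / detach / reconnect cycle
          # with key$keyid (the connect_authenticate function of auth.sh).
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done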
00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.112 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.369 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.933 00:21:18.933 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.933 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.933 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.191 { 00:21:19.191 "cntlid": 123, 00:21:19.191 "qid": 0, 00:21:19.191 "state": "enabled", 00:21:19.191 "thread": "nvmf_tgt_poll_group_000", 00:21:19.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.191 "listen_address": { 00:21:19.191 "trtype": "TCP", 00:21:19.191 "adrfam": "IPv4", 00:21:19.191 "traddr": "10.0.0.2", 00:21:19.191 "trsvcid": "4420" 00:21:19.191 }, 00:21:19.191 "peer_address": { 00:21:19.191 "trtype": "TCP", 00:21:19.191 "adrfam": "IPv4", 00:21:19.191 "traddr": "10.0.0.1", 00:21:19.191 "trsvcid": "34934" 00:21:19.191 }, 00:21:19.191 "auth": { 00:21:19.191 "state": "completed", 00:21:19.191 "digest": "sha512", 00:21:19.191 "dhgroup": "ffdhe4096" 00:21:19.191 } 00:21:19.191 } 00:21:19.191 ]' 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.191 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.452 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:19.452 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.449 19:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.449 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.707 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.965 00:21:21.230 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.230 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.230 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.486 19:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.486 { 00:21:21.486 "cntlid": 125, 00:21:21.486 "qid": 0, 00:21:21.486 "state": "enabled", 00:21:21.486 "thread": "nvmf_tgt_poll_group_000", 00:21:21.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.486 "listen_address": { 00:21:21.486 "trtype": "TCP", 00:21:21.486 "adrfam": "IPv4", 00:21:21.486 "traddr": "10.0.0.2", 00:21:21.486 "trsvcid": "4420" 00:21:21.486 }, 00:21:21.486 "peer_address": { 00:21:21.486 "trtype": "TCP", 00:21:21.486 "adrfam": "IPv4", 00:21:21.486 "traddr": "10.0.0.1", 00:21:21.486 "trsvcid": "34966" 00:21:21.486 }, 00:21:21.486 "auth": { 00:21:21.486 "state": "completed", 00:21:21.486 "digest": "sha512", 00:21:21.486 "dhgroup": "ffdhe4096" 00:21:21.486 } 00:21:21.486 } 00:21:21.486 ]' 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.486 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.743 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:21.743 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.677 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.934 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.935 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.935 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.935 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.935 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.501 00:21:23.501 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.501 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.501 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.757 19:51:13 
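Each cycle above ends with the same teardown before the next key is tried: drop the host-side bdev controller, do one nvme-cli connect/disconnect round trip, then deauthorize the host on the target. A sketch using the same hypothetical rpc and hostnqn variables as the earlier sketches:

  # Host: detach the bdev controller created for the previous key.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # (nvme connect / nvme disconnect with the raw DHHC-1 secrets happens here,
  #  exactly as in the nvme-cli sketch earlier.)

  # Target: remove the host entry, and with it its key binding, from the subsystem.
  "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"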
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.757 { 00:21:23.757 "cntlid": 127, 00:21:23.757 "qid": 0, 00:21:23.757 "state": "enabled", 00:21:23.757 "thread": "nvmf_tgt_poll_group_000", 00:21:23.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.757 "listen_address": { 00:21:23.757 "trtype": "TCP", 00:21:23.757 "adrfam": "IPv4", 00:21:23.757 "traddr": "10.0.0.2", 00:21:23.757 "trsvcid": "4420" 00:21:23.757 }, 00:21:23.757 "peer_address": { 00:21:23.757 "trtype": "TCP", 00:21:23.757 "adrfam": "IPv4", 00:21:23.757 "traddr": "10.0.0.1", 00:21:23.757 "trsvcid": "34996" 00:21:23.757 }, 00:21:23.757 "auth": { 00:21:23.757 "state": "completed", 00:21:23.757 "digest": "sha512", 00:21:23.757 "dhgroup": "ffdhe4096" 00:21:23.757 } 00:21:23.757 } 00:21:23.757 ]' 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.757 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.015 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:24.015 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:24.946 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.203 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.461 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.026 00:21:26.026 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.026 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.026 
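
Every attach in this loop is verified the same way as the cntlid 127 record shown earlier: the subsystem's qpair list is fetched once and its auth fields are checked with jq. A minimal sketch of that check, reusing the RPC and NQN variables from the sketch above:

  qpairs="$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")"    # target-side RPC; one record per connected queue
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha512
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: the dhgroup configured for this pass
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
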
19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.284 { 00:21:26.284 "cntlid": 129, 00:21:26.284 "qid": 0, 00:21:26.284 "state": "enabled", 00:21:26.284 "thread": "nvmf_tgt_poll_group_000", 00:21:26.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.284 "listen_address": { 00:21:26.284 "trtype": "TCP", 00:21:26.284 "adrfam": "IPv4", 00:21:26.284 "traddr": "10.0.0.2", 00:21:26.284 "trsvcid": "4420" 00:21:26.284 }, 00:21:26.284 "peer_address": { 00:21:26.284 "trtype": "TCP", 00:21:26.284 "adrfam": "IPv4", 00:21:26.284 "traddr": "10.0.0.1", 00:21:26.284 "trsvcid": "35022" 00:21:26.284 }, 00:21:26.284 "auth": { 00:21:26.284 "state": "completed", 00:21:26.284 "digest": "sha512", 00:21:26.284 "dhgroup": "ffdhe6144" 00:21:26.284 } 00:21:26.284 } 00:21:26.284 ]' 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.284 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.284 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.284 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.284 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.284 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.284 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.542 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:26.542 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret 
DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.474 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.039 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.625 00:21:28.625 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.625 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.625 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.883 { 00:21:28.883 "cntlid": 131, 00:21:28.883 "qid": 0, 00:21:28.883 "state": "enabled", 00:21:28.883 "thread": "nvmf_tgt_poll_group_000", 00:21:28.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.883 "listen_address": { 00:21:28.883 "trtype": "TCP", 00:21:28.883 "adrfam": "IPv4", 00:21:28.883 "traddr": "10.0.0.2", 00:21:28.883 "trsvcid": "4420" 00:21:28.883 }, 00:21:28.883 "peer_address": { 00:21:28.883 "trtype": "TCP", 00:21:28.883 "adrfam": "IPv4", 00:21:28.883 "traddr": "10.0.0.1", 00:21:28.883 "trsvcid": "37404" 00:21:28.883 }, 00:21:28.883 "auth": { 00:21:28.883 "state": "completed", 00:21:28.883 "digest": "sha512", 00:21:28.883 "dhgroup": "ffdhe6144" 00:21:28.883 } 00:21:28.883 } 00:21:28.883 ]' 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.883 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.141 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:29.141 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:30.074 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.332 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.589 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.589 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.589 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.589 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.589 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.589 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.152 00:21:31.152 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.152 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.152 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.411 { 00:21:31.411 "cntlid": 133, 00:21:31.411 "qid": 0, 00:21:31.411 "state": "enabled", 00:21:31.411 "thread": "nvmf_tgt_poll_group_000", 00:21:31.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.411 "listen_address": { 00:21:31.411 "trtype": "TCP", 00:21:31.411 "adrfam": "IPv4", 00:21:31.411 "traddr": "10.0.0.2", 00:21:31.411 "trsvcid": "4420" 00:21:31.411 }, 00:21:31.411 "peer_address": { 00:21:31.411 "trtype": "TCP", 00:21:31.411 "adrfam": "IPv4", 00:21:31.411 "traddr": "10.0.0.1", 00:21:31.411 "trsvcid": "37428" 00:21:31.411 }, 00:21:31.411 "auth": { 00:21:31.411 "state": "completed", 00:21:31.411 "digest": "sha512", 00:21:31.411 "dhgroup": "ffdhe6144" 00:21:31.411 } 00:21:31.411 } 00:21:31.411 ]' 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.411 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.668 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret 
DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:31.668 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.603 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:32.886 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.453 00:21:33.453 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.453 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.453 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.710 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.710 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.710 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.710 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.968 { 00:21:33.968 "cntlid": 135, 00:21:33.968 "qid": 0, 00:21:33.968 "state": "enabled", 00:21:33.968 "thread": "nvmf_tgt_poll_group_000", 00:21:33.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.968 "listen_address": { 00:21:33.968 "trtype": "TCP", 00:21:33.968 "adrfam": "IPv4", 00:21:33.968 "traddr": "10.0.0.2", 00:21:33.968 "trsvcid": "4420" 00:21:33.968 }, 00:21:33.968 "peer_address": { 00:21:33.968 "trtype": "TCP", 00:21:33.968 "adrfam": "IPv4", 00:21:33.968 "traddr": "10.0.0.1", 00:21:33.968 "trsvcid": "37446" 00:21:33.968 }, 00:21:33.968 "auth": { 00:21:33.968 "state": "completed", 00:21:33.968 "digest": "sha512", 00:21:33.968 "dhgroup": "ffdhe6144" 00:21:33.968 } 00:21:33.968 } 00:21:33.968 ]' 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.968 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.227 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:34.227 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.158 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.415 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.673 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.673 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.673 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.673 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.605 00:21:36.605 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.605 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.605 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.862 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.863 { 00:21:36.863 "cntlid": 137, 00:21:36.863 "qid": 0, 00:21:36.863 "state": "enabled", 00:21:36.863 "thread": "nvmf_tgt_poll_group_000", 00:21:36.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.863 "listen_address": { 00:21:36.863 "trtype": "TCP", 00:21:36.863 "adrfam": "IPv4", 00:21:36.863 "traddr": "10.0.0.2", 00:21:36.863 "trsvcid": "4420" 00:21:36.863 }, 00:21:36.863 "peer_address": { 00:21:36.863 "trtype": "TCP", 00:21:36.863 "adrfam": "IPv4", 00:21:36.863 "traddr": "10.0.0.1", 00:21:36.863 "trsvcid": "37468" 00:21:36.863 }, 00:21:36.863 "auth": { 00:21:36.863 "state": "completed", 00:21:36.863 "digest": "sha512", 00:21:36.863 "dhgroup": "ffdhe8192" 00:21:36.863 } 00:21:36.863 } 00:21:36.863 ]' 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.863 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.120 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:37.120 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.052 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.311 19:51:28 
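
Each pass also exercises the kernel initiator: after bdev_nvme_detach_controller, nvme-cli connects in-band with the DHHC-1 secrets that correspond to the selected key (plus the controller secret when one exists), then disconnects, and the host entry is removed before the next key/dhgroup combination. A condensed sketch of that leg, reusing the variables from the first sketch; the secret strings below are placeholders for the full DHHC-1:00:.../DHHC-1:03:... values printed in the trace:

  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret 'DHHC-1:00:<host secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'
  nvme disconnect -n "$SUBNQN"                           # expect: disconnected 1 controller(s)
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # clean up before the next iteration
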
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.311 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.244 00:21:39.244 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.244 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.244 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.502 { 00:21:39.502 "cntlid": 139, 00:21:39.502 "qid": 0, 00:21:39.502 "state": "enabled", 00:21:39.502 "thread": "nvmf_tgt_poll_group_000", 00:21:39.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.502 "listen_address": { 00:21:39.502 "trtype": "TCP", 00:21:39.502 "adrfam": "IPv4", 00:21:39.502 "traddr": "10.0.0.2", 00:21:39.502 "trsvcid": "4420" 00:21:39.502 }, 00:21:39.502 "peer_address": { 00:21:39.502 "trtype": "TCP", 00:21:39.502 "adrfam": "IPv4", 00:21:39.502 "traddr": "10.0.0.1", 00:21:39.502 "trsvcid": "55048" 00:21:39.502 }, 00:21:39.502 "auth": { 00:21:39.502 "state": "completed", 00:21:39.502 "digest": "sha512", 00:21:39.502 "dhgroup": "ffdhe8192" 00:21:39.502 } 00:21:39.502 } 00:21:39.502 ]' 00:21:39.502 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.759 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.759 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.759 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.759 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.759 19:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.759 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.759 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.017 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:40.017 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: --dhchap-ctrl-secret DHHC-1:02:MGUxMDI4NTRlNmNkMDczZDUwYmQxMGJkYTdhM2EwNzg4NjFkMDE1ODc5OGRmNWM0UjSUtA==: 00:21:40.950 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.950 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.950 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.951 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.951 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.951 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.951 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.951 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.209 19:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.209 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.141 00:21:42.142 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.142 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.142 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.399 { 00:21:42.399 "cntlid": 141, 00:21:42.399 "qid": 0, 00:21:42.399 "state": "enabled", 00:21:42.399 "thread": "nvmf_tgt_poll_group_000", 00:21:42.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.399 "listen_address": { 00:21:42.399 "trtype": "TCP", 00:21:42.399 "adrfam": "IPv4", 00:21:42.399 "traddr": "10.0.0.2", 00:21:42.399 "trsvcid": "4420" 00:21:42.399 }, 00:21:42.399 "peer_address": { 00:21:42.399 "trtype": "TCP", 00:21:42.399 "adrfam": "IPv4", 00:21:42.399 "traddr": "10.0.0.1", 00:21:42.399 "trsvcid": "55064" 00:21:42.399 }, 00:21:42.399 "auth": { 00:21:42.399 "state": "completed", 00:21:42.399 "digest": "sha512", 00:21:42.399 "dhgroup": "ffdhe8192" 00:21:42.399 } 00:21:42.399 } 00:21:42.399 ]' 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.399 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.656 19:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.656 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.656 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.656 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.656 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.914 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:42.914 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:01:ZDdhZmViM2ZlMWJlMmY0OThjMDI1MDE0NmMwNzA0NmZNOPyr: 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.847 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.104 19:51:33 
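
The ckey expansion above is what makes the controller-side key optional per key index: when a ckeyN entry exists, --dhchap-ctrlr-key ckeyN is appended to both nvmf_subsystem_add_host and bdev_nvme_attach_controller; for key3 the entry is empty and the flag is simply omitted. A small sketch of the same idiom, with keyid standing in for the index that the trace's connect_authenticate receives as $3:

  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty array when ckeys[keyid] is unset
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
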
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.104 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.082 00:21:45.082 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.082 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.082 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.359 { 00:21:45.359 "cntlid": 143, 00:21:45.359 "qid": 0, 00:21:45.359 "state": "enabled", 00:21:45.359 "thread": "nvmf_tgt_poll_group_000", 00:21:45.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.359 "listen_address": { 00:21:45.359 "trtype": "TCP", 00:21:45.359 "adrfam": "IPv4", 00:21:45.359 "traddr": "10.0.0.2", 00:21:45.359 "trsvcid": "4420" 00:21:45.359 }, 00:21:45.359 "peer_address": { 00:21:45.359 "trtype": "TCP", 00:21:45.359 "adrfam": "IPv4", 00:21:45.359 "traddr": "10.0.0.1", 00:21:45.359 "trsvcid": "55100" 00:21:45.359 }, 00:21:45.359 "auth": { 00:21:45.359 "state": "completed", 00:21:45.359 "digest": "sha512", 00:21:45.359 "dhgroup": "ffdhe8192" 00:21:45.359 } 00:21:45.359 } 00:21:45.359 ]' 00:21:45.359 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.359 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.359 
19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.359 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.359 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.359 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.359 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.359 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.617 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:45.617 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.551 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.116 19:51:36 
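
With the per-pair sweep finished, the host options are widened back out: the digest and dhgroup lists are comma-joined (the IFS=, / printf %s pair in the trace) and handed to bdev_nvme_set_options in a single call, and the follow-up connect is then checked against sha512/ffdhe8192. A sketch of that reconfiguration, reusing the RPC variable from the first sketch:

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
      --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"
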
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.116 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.049 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.049 { 00:21:48.049 "cntlid": 145, 00:21:48.049 "qid": 0, 00:21:48.049 "state": "enabled", 00:21:48.049 "thread": "nvmf_tgt_poll_group_000", 00:21:48.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.049 "listen_address": { 00:21:48.049 "trtype": "TCP", 00:21:48.049 "adrfam": "IPv4", 00:21:48.049 "traddr": "10.0.0.2", 00:21:48.049 "trsvcid": "4420" 00:21:48.049 }, 00:21:48.049 "peer_address": { 00:21:48.049 
"trtype": "TCP", 00:21:48.049 "adrfam": "IPv4", 00:21:48.049 "traddr": "10.0.0.1", 00:21:48.049 "trsvcid": "55134" 00:21:48.049 }, 00:21:48.049 "auth": { 00:21:48.049 "state": "completed", 00:21:48.049 "digest": "sha512", 00:21:48.049 "dhgroup": "ffdhe8192" 00:21:48.049 } 00:21:48.049 } 00:21:48.049 ]' 00:21:48.049 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.305 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.562 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:48.562 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YTM3NmNiNzM0OGEzZjQ3ZTUzNmU3YTRjMThkNWZhYmVhNzVlMjliZjRjNWNhMjhk+9YOeQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk2MTc2MzIxZjdmYjgxMTYyNWMxYzg1Y2E1ZWU0MTEyYjM1MzFmMDY1NTU1ZDI1YzBjM2NiZTQ5N2IzMjE4ZOkSQIA=: 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:49.498 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:50.450 request: 00:21:50.450 { 00:21:50.450 "name": "nvme0", 00:21:50.450 "trtype": "tcp", 00:21:50.450 "traddr": "10.0.0.2", 00:21:50.450 "adrfam": "ipv4", 00:21:50.450 "trsvcid": "4420", 00:21:50.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.450 "prchk_reftag": false, 00:21:50.450 "prchk_guard": false, 00:21:50.450 "hdgst": false, 00:21:50.450 "ddgst": false, 00:21:50.450 "dhchap_key": "key2", 00:21:50.450 "allow_unrecognized_csi": false, 00:21:50.450 "method": "bdev_nvme_attach_controller", 00:21:50.450 "req_id": 1 00:21:50.450 } 00:21:50.450 Got JSON-RPC error response 00:21:50.450 response: 00:21:50.450 { 00:21:50.450 "code": -5, 00:21:50.450 "message": "Input/output error" 00:21:50.450 } 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.450 19:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.450 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.382 request: 00:21:51.382 { 00:21:51.382 "name": "nvme0", 00:21:51.382 "trtype": "tcp", 00:21:51.382 "traddr": "10.0.0.2", 00:21:51.382 "adrfam": "ipv4", 00:21:51.382 "trsvcid": "4420", 00:21:51.382 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.382 "prchk_reftag": false, 00:21:51.382 "prchk_guard": false, 00:21:51.382 "hdgst": false, 00:21:51.382 "ddgst": false, 00:21:51.382 "dhchap_key": "key1", 00:21:51.382 "dhchap_ctrlr_key": "ckey2", 00:21:51.382 "allow_unrecognized_csi": false, 00:21:51.382 "method": "bdev_nvme_attach_controller", 00:21:51.382 "req_id": 1 00:21:51.382 } 00:21:51.382 Got JSON-RPC error response 00:21:51.382 response: 00:21:51.382 { 00:21:51.382 "code": -5, 00:21:51.382 "message": "Input/output error" 00:21:51.382 } 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:51.382 19:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.382 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.314 request: 00:21:52.314 { 00:21:52.314 "name": "nvme0", 00:21:52.314 "trtype": "tcp", 00:21:52.314 "traddr": "10.0.0.2", 00:21:52.314 "adrfam": "ipv4", 00:21:52.314 "trsvcid": "4420", 00:21:52.314 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.314 "prchk_reftag": false, 00:21:52.314 "prchk_guard": false, 00:21:52.314 "hdgst": false, 00:21:52.314 "ddgst": false, 00:21:52.314 "dhchap_key": "key1", 00:21:52.314 "dhchap_ctrlr_key": "ckey1", 00:21:52.314 "allow_unrecognized_csi": false, 00:21:52.314 "method": "bdev_nvme_attach_controller", 00:21:52.314 "req_id": 1 00:21:52.314 } 00:21:52.314 Got JSON-RPC error response 00:21:52.314 response: 00:21:52.315 { 00:21:52.315 "code": -5, 00:21:52.315 "message": "Input/output error" 00:21:52.315 } 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2984007 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2984007 ']' 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2984007 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2984007 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2984007' 00:21:52.315 killing process with pid 2984007 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2984007 00:21:52.315 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2984007 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3007528 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3007528 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3007528 ']' 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.689 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3007528 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3007528 ']' 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
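The entries above stop the previous target process (pid 2984007) and start a fresh nvmf_tgt with --wait-for-rpc and the nvmf_auth log component enabled, then wait for its RPC socket before any further configuration. Condensed into plain shell, a sketch of that restart step, built only from commands and paths visible in this trace (the harness helpers nvmfappstart/waitforlisten wrap roughly this; the readiness poll below is an approximation of waitforlisten, not its actual implementation):

# Sketch only: condensed from the surrounding trace, not a drop-in script.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch the target inside the test netns, paused until RPC configuration
# is finished (--wait-for-rpc), with DH-CHAP auth logging enabled (-L nvmf_auth).
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!                          # logged here as 3007528

# Wait until the UNIX-domain RPC socket answers before issuing rpc_cmd calls.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done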
00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.623 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.881 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.881 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:54.881 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:54.881 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.881 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 null0 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8Nx 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0VZ ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0VZ 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bav 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3DF ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DF 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.139 19:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nb5 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.o1k ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o1k 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oWG 00:21:55.139 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.140 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.397 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.397 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.397 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
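After the key files are registered with keyring_file_add_key, each connect_authenticate iteration in this trace follows the same pattern: allow the host NQN on the subsystem with a given DH-CHAP key, attach a controller from the host-side bdev_nvme initiator (on /var/tmp/host.sock) using the matching key, check the negotiated digest/dhgroup/state on the resulting qpair, and tear it down again. A condensed sketch of the sha512/ffdhe8192/key3 pass shown here, using only RPCs, NQNs and addresses that appear in this log:

# Sketch of one connect_authenticate pass (sha512 / ffdhe8192 / key3).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Target side: authorize the host with DH-CHAP key3.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side: attach a controller over TCP with the same key.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
  -b nvme0 --dhchap-key key3

# Verify the authenticated qpair, then detach and de-authorize the host.
"$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The later NOT bdev_connect cases in this trace exercise the same attach path with keys the subsystem was not configured for; those attempts fail as shown in the request/response dumps, returning JSON-RPC code -5, "Input/output error".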
00:21:55.397 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.770 nvme0n1 00:21:56.770 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.770 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.770 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.028 { 00:21:57.028 "cntlid": 1, 00:21:57.028 "qid": 0, 00:21:57.028 "state": "enabled", 00:21:57.028 "thread": "nvmf_tgt_poll_group_000", 00:21:57.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.028 "listen_address": { 00:21:57.028 "trtype": "TCP", 00:21:57.028 "adrfam": "IPv4", 00:21:57.028 "traddr": "10.0.0.2", 00:21:57.028 "trsvcid": "4420" 00:21:57.028 }, 00:21:57.028 "peer_address": { 00:21:57.028 "trtype": "TCP", 00:21:57.028 "adrfam": "IPv4", 00:21:57.028 "traddr": "10.0.0.1", 00:21:57.028 "trsvcid": "45538" 00:21:57.028 }, 00:21:57.028 "auth": { 00:21:57.028 "state": "completed", 00:21:57.028 "digest": "sha512", 00:21:57.028 "dhgroup": "ffdhe8192" 00:21:57.028 } 00:21:57.028 } 00:21:57.028 ]' 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.028 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.286 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:57.286 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.917 request: 00:21:58.917 { 00:21:58.917 "name": "nvme0", 00:21:58.917 "trtype": "tcp", 00:21:58.917 "traddr": "10.0.0.2", 00:21:58.917 "adrfam": "ipv4", 00:21:58.917 "trsvcid": "4420", 00:21:58.917 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.917 "prchk_reftag": false, 00:21:58.917 "prchk_guard": false, 00:21:58.917 "hdgst": false, 00:21:58.917 "ddgst": false, 00:21:58.917 "dhchap_key": "key3", 00:21:58.917 "allow_unrecognized_csi": false, 00:21:58.917 "method": "bdev_nvme_attach_controller", 00:21:58.917 "req_id": 1 00:21:58.917 } 00:21:58.917 Got JSON-RPC error response 00:21:58.917 response: 00:21:58.917 { 00:21:58.917 "code": -5, 00:21:58.917 "message": "Input/output error" 00:21:58.917 } 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:58.917 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.175 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.432 request: 00:21:59.432 { 00:21:59.432 "name": "nvme0", 00:21:59.432 "trtype": "tcp", 00:21:59.432 "traddr": "10.0.0.2", 00:21:59.432 "adrfam": "ipv4", 00:21:59.432 "trsvcid": "4420", 00:21:59.432 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:59.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.432 "prchk_reftag": false, 00:21:59.432 "prchk_guard": false, 00:21:59.432 "hdgst": false, 00:21:59.432 "ddgst": false, 00:21:59.432 "dhchap_key": "key3", 00:21:59.432 "allow_unrecognized_csi": false, 00:21:59.432 "method": "bdev_nvme_attach_controller", 00:21:59.432 "req_id": 1 00:21:59.432 } 00:21:59.432 Got JSON-RPC error response 00:21:59.432 response: 00:21:59.432 { 00:21:59.432 "code": -5, 00:21:59.432 "message": "Input/output error" 00:21:59.432 } 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.690 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.948 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.948 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.948 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.948 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:59.949 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:00.513 request: 00:22:00.513 { 00:22:00.513 "name": "nvme0", 00:22:00.513 "trtype": "tcp", 00:22:00.513 "traddr": "10.0.0.2", 00:22:00.513 "adrfam": "ipv4", 00:22:00.513 "trsvcid": "4420", 00:22:00.513 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:00.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.513 "prchk_reftag": false, 00:22:00.513 "prchk_guard": false, 00:22:00.513 "hdgst": false, 00:22:00.513 "ddgst": false, 00:22:00.513 "dhchap_key": "key0", 00:22:00.513 "dhchap_ctrlr_key": "key1", 00:22:00.513 "allow_unrecognized_csi": false, 00:22:00.513 "method": "bdev_nvme_attach_controller", 00:22:00.513 "req_id": 1 00:22:00.513 } 00:22:00.513 Got JSON-RPC error response 00:22:00.513 response: 00:22:00.513 { 00:22:00.513 "code": -5, 00:22:00.513 "message": "Input/output error" 00:22:00.513 } 00:22:00.513 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:00.513 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:00.513 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:00.513 19:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:00.513 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:00.513 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:00.513 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:00.771 nvme0n1 00:22:00.771 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:00.771 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:00.771 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.028 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.028 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.028 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:01.286 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:03.184 nvme0n1 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:03.184 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.442 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.442 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:22:03.442 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: --dhchap-ctrl-secret DHHC-1:03:OGZlYTYxNDM2OWViZGNhMmUyZTkyMDEzYmRhZjFhZGZhZTlhODZlNWE1YjE1Mjg1MmRjYTAxNjNiMjYzZjM1YxzIup4=: 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.375 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.633 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:04.633 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.633 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:04.633 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.890 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.890 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.890 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.890 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:04.890 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:04.890 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:05.824 request: 00:22:05.824 { 00:22:05.824 "name": "nvme0", 00:22:05.824 "trtype": "tcp", 00:22:05.824 "traddr": "10.0.0.2", 00:22:05.824 "adrfam": "ipv4", 00:22:05.824 "trsvcid": "4420", 00:22:05.824 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.824 "prchk_reftag": false, 00:22:05.824 "prchk_guard": false, 00:22:05.824 "hdgst": false, 00:22:05.824 "ddgst": false, 00:22:05.824 "dhchap_key": "key1", 00:22:05.824 "allow_unrecognized_csi": false, 00:22:05.824 "method": "bdev_nvme_attach_controller", 00:22:05.824 "req_id": 1 00:22:05.824 } 00:22:05.824 Got JSON-RPC error response 00:22:05.824 response: 00:22:05.824 { 00:22:05.824 "code": -5, 00:22:05.824 "message": "Input/output error" 00:22:05.824 } 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:05.824 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:07.197 nvme0n1 00:22:07.197 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:07.197 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:07.197 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.455 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.455 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.455 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:07.713 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:07.971 nvme0n1 00:22:07.972 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:07.972 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:07.972 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.230 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.230 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.230 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: '' 2s 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: ]] 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTgwMDk1MmIxZjAwZmMyZDZhMzgzNThjOWY2ZDU0MWbBfGMg: 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:08.796 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: 2s 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: ]] 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGZjNDViMzg2ODhlODE4MGIzNzMwOTJmYjMxYzRjMjQ0OTA2OTY0NjdkZDFmZjkzPuQA+A==: 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:10.693 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:12.632 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:12.920 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:14.293 nvme0n1 00:22:14.293 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.293 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.293 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.293 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.293 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.293 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.228 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:15.228 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:15.228 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:15.228 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:15.793 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.727 request: 00:22:16.727 { 00:22:16.727 "name": "nvme0", 00:22:16.727 "dhchap_key": "key1", 00:22:16.727 "dhchap_ctrlr_key": "key3", 00:22:16.727 "method": "bdev_nvme_set_keys", 00:22:16.727 "req_id": 1 00:22:16.727 } 00:22:16.727 Got JSON-RPC error response 00:22:16.727 response: 00:22:16.727 { 00:22:16.727 "code": -13, 00:22:16.727 "message": "Permission denied" 00:22:16.727 } 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:16.727 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.985 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:16.985 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:18.368 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:18.368 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:18.368 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.368 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.742 nvme0n1 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
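
The records above exercise DHCHAP key rotation: the target's keys are updated with nvmf_subsystem_set_keys, the host follows with bdev_nvme_set_keys over its own RPC socket (/var/tmp/host.sock), and a rotation to a key pair the target does not hold is expected to fail with JSON-RPC error -13 (Permission denied). A minimal sketch of that flow, assuming the socket path, NQNs and key names recorded in this log; the hostrpc function and the relative rpc.py path are stand-ins for the test script's helpers:

  # Hypothetical condensed key-rotation flow; values taken from the log above.
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostrpc() { scripts/rpc.py -s "$HOSTSOCK" "$@"; }   # RPC to the host-side SPDK instance
  # Rotate the target subsystem to key2/key3, then let the host controller follow.
  scripts/rpc.py nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
  hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # A pair the target does not hold must be rejected (JSON-RPC -13, Permission denied).
  if hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0; then
      echo "unexpected success: mismatched keys were accepted" >&2
      exit 1
  fi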
00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:20.000 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:20.934 request: 00:22:20.934 { 00:22:20.934 "name": "nvme0", 00:22:20.934 "dhchap_key": "key2", 00:22:20.934 "dhchap_ctrlr_key": "key0", 00:22:20.934 "method": "bdev_nvme_set_keys", 00:22:20.934 "req_id": 1 00:22:20.934 } 00:22:20.934 Got JSON-RPC error response 00:22:20.934 response: 00:22:20.934 { 00:22:20.934 "code": -13, 00:22:20.934 "message": "Permission denied" 00:22:20.934 } 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.934 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:21.192 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:21.192 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:22.126 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:22.126 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:22.126 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.383 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:22.383 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:23.316 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:23.317 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:23.317 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:23.575 19:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2984158 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2984158 ']' 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2984158 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2984158 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2984158' 00:22:23.575 killing process with pid 2984158 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2984158 00:22:23.575 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2984158 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.105 rmmod nvme_tcp 00:22:26.105 rmmod nvme_fabrics 00:22:26.105 rmmod nvme_keyring 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3007528 ']' 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3007528 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3007528 ']' 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3007528 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3007528 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3007528' 00:22:26.105 killing process with pid 3007528 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3007528 00:22:26.105 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3007528 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:27.040 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.041 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.041 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8Nx /tmp/spdk.key-sha256.bav /tmp/spdk.key-sha384.nb5 /tmp/spdk.key-sha512.oWG /tmp/spdk.key-sha512.0VZ /tmp/spdk.key-sha384.3DF /tmp/spdk.key-sha256.o1k '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:29.573 00:22:29.573 real 3m46.871s 00:22:29.573 user 8m45.982s 00:22:29.573 sys 0m27.362s 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.573 ************************************ 00:22:29.573 END TEST nvmf_auth_target 00:22:29.573 ************************************ 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.573 ************************************ 00:22:29.573 START TEST nvmf_bdevio_no_huge 00:22:29.573 ************************************ 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:29.573 * Looking for test storage... 00:22:29.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:29.573 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:29.573 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:29.573 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:29.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.574 --rc genhtml_branch_coverage=1 00:22:29.574 --rc genhtml_function_coverage=1 00:22:29.574 --rc genhtml_legend=1 00:22:29.574 --rc geninfo_all_blocks=1 00:22:29.574 --rc geninfo_unexecuted_blocks=1 00:22:29.574 00:22:29.574 ' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:29.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.574 --rc genhtml_branch_coverage=1 00:22:29.574 --rc genhtml_function_coverage=1 00:22:29.574 --rc genhtml_legend=1 00:22:29.574 --rc geninfo_all_blocks=1 00:22:29.574 --rc geninfo_unexecuted_blocks=1 00:22:29.574 00:22:29.574 ' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:29.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.574 --rc genhtml_branch_coverage=1 00:22:29.574 --rc genhtml_function_coverage=1 00:22:29.574 --rc genhtml_legend=1 00:22:29.574 --rc geninfo_all_blocks=1 00:22:29.574 --rc geninfo_unexecuted_blocks=1 00:22:29.574 00:22:29.574 ' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:29.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.574 --rc genhtml_branch_coverage=1 00:22:29.574 --rc genhtml_function_coverage=1 00:22:29.574 --rc genhtml_legend=1 00:22:29.574 --rc geninfo_all_blocks=1 00:22:29.574 --rc geninfo_unexecuted_blocks=1 00:22:29.574 00:22:29.574 ' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:29.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.574 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:29.575 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:29.575 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:29.575 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.541 
19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:31.541 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:31.541 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:31.541 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:31.541 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.541 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:31.542 00:22:31.542 --- 10.0.0.2 ping statistics --- 00:22:31.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.542 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:22:31.542 00:22:31.542 --- 10.0.0.1 ping statistics --- 00:22:31.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.542 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3013447 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3013447 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3013447 ']' 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.542 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.542 [2024-10-13 19:52:21.325031] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:22:31.542 [2024-10-13 19:52:21.325217] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:31.804 [2024-10-13 19:52:21.492745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.062 [2024-10-13 19:52:21.645646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.062 [2024-10-13 19:52:21.645727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.062 [2024-10-13 19:52:21.645753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.062 [2024-10-13 19:52:21.645777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.062 [2024-10-13 19:52:21.645798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.062 [2024-10-13 19:52:21.647938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:32.062 [2024-10-13 19:52:21.647993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:32.062 [2024-10-13 19:52:21.648040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.062 [2024-10-13 19:52:21.648047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.644 [2024-10-13 19:52:22.367969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.644 Malloc0 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.644 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.644 [2024-10-13 19:52:22.458187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:32.902 { 00:22:32.902 "params": { 00:22:32.902 "name": "Nvme$subsystem", 00:22:32.902 "trtype": "$TEST_TRANSPORT", 00:22:32.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.902 "adrfam": "ipv4", 00:22:32.902 "trsvcid": "$NVMF_PORT", 00:22:32.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.902 "hdgst": ${hdgst:-false}, 00:22:32.902 "ddgst": ${ddgst:-false} 00:22:32.902 }, 00:22:32.902 "method": "bdev_nvme_attach_controller" 00:22:32.902 } 00:22:32.902 EOF 00:22:32.902 )") 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:22:32.902 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:32.902 "params": { 00:22:32.902 "name": "Nvme1", 00:22:32.902 "trtype": "tcp", 00:22:32.902 "traddr": "10.0.0.2", 00:22:32.902 "adrfam": "ipv4", 00:22:32.902 "trsvcid": "4420", 00:22:32.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.902 "hdgst": false, 00:22:32.902 "ddgst": false 00:22:32.902 }, 00:22:32.902 "method": "bdev_nvme_attach_controller" 00:22:32.902 }' 00:22:32.902 [2024-10-13 19:52:22.541158] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:32.902 [2024-10-13 19:52:22.541296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3013611 ] 00:22:32.902 [2024-10-13 19:52:22.686425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.160 [2024-10-13 19:52:22.829838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.160 [2024-10-13 19:52:22.829880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.160 [2024-10-13 19:52:22.829890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.726 I/O targets: 00:22:33.726 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:33.726 00:22:33.726 00:22:33.726 CUnit - A unit testing framework for C - Version 2.1-3 00:22:33.726 http://cunit.sourceforge.net/ 00:22:33.726 00:22:33.726 00:22:33.726 Suite: bdevio tests on: Nvme1n1 00:22:33.726 Test: blockdev write read block ...passed 00:22:33.726 Test: blockdev write zeroes read block ...passed 00:22:33.726 Test: blockdev write zeroes read no split ...passed 00:22:33.984 Test: blockdev write zeroes read split ...passed 00:22:33.984 Test: blockdev write zeroes read split partial ...passed 00:22:33.984 Test: blockdev reset ...[2024-10-13 19:52:23.570712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.984 [2024-10-13 19:52:23.570937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:33.984 [2024-10-13 19:52:23.592316] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:33.984 passed 00:22:33.984 Test: blockdev write read 8 blocks ...passed 00:22:33.984 Test: blockdev write read size > 128k ...passed 00:22:33.984 Test: blockdev write read invalid size ...passed 00:22:33.984 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:33.984 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:33.984 Test: blockdev write read max offset ...passed 00:22:33.984 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:33.984 Test: blockdev writev readv 8 blocks ...passed 00:22:33.984 Test: blockdev writev readv 30 x 1block ...passed 00:22:33.984 Test: blockdev writev readv block ...passed 00:22:34.243 Test: blockdev writev readv size > 128k ...passed 00:22:34.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:34.243 Test: blockdev comparev and writev ...[2024-10-13 19:52:23.809134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.809208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.809262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.809291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.809770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.809841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.809866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.810330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.810363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.810412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.810439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.810901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.810933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.810965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.243 [2024-10-13 19:52:23.810991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:34.243 passed 00:22:34.243 Test: blockdev nvme passthru rw ...passed 00:22:34.243 Test: blockdev nvme passthru vendor specific ...[2024-10-13 19:52:23.892853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.243 [2024-10-13 19:52:23.892916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.893178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.243 [2024-10-13 19:52:23.893211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.893422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.243 [2024-10-13 19:52:23.893455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:34.243 [2024-10-13 19:52:23.893658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.243 [2024-10-13 19:52:23.893698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:34.243 passed 00:22:34.243 Test: blockdev nvme admin passthru ...passed 00:22:34.243 Test: blockdev copy ...passed 00:22:34.243 00:22:34.243 Run Summary: Type Total Ran Passed Failed Inactive 00:22:34.243 suites 1 1 n/a 0 0 00:22:34.243 tests 23 23 23 0 0 00:22:34.243 asserts 152 152 152 0 n/a 00:22:34.243 00:22:34.243 Elapsed time = 1.077 seconds 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.176 rmmod nvme_tcp 00:22:35.176 rmmod nvme_fabrics 00:22:35.176 rmmod nvme_keyring 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3013447 ']' 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 3013447 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3013447 ']' 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3013447 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3013447 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3013447' 00:22:35.176 killing process with pid 3013447 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3013447 00:22:35.176 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3013447 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.112 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.014 00:22:38.014 real 0m8.683s 00:22:38.014 user 0m20.064s 00:22:38.014 sys 0m2.856s 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.014 ************************************ 00:22:38.014 END TEST nvmf_bdevio_no_huge 00:22:38.014 ************************************ 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.014 ************************************ 00:22:38.014 START TEST nvmf_tls 00:22:38.014 ************************************ 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:38.014 * Looking for test storage... 00:22:38.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:38.014 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:38.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.015 --rc genhtml_branch_coverage=1 00:22:38.015 --rc genhtml_function_coverage=1 00:22:38.015 --rc genhtml_legend=1 00:22:38.015 --rc geninfo_all_blocks=1 00:22:38.015 --rc geninfo_unexecuted_blocks=1 00:22:38.015 00:22:38.015 ' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:38.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.015 --rc genhtml_branch_coverage=1 00:22:38.015 --rc genhtml_function_coverage=1 00:22:38.015 --rc genhtml_legend=1 00:22:38.015 --rc geninfo_all_blocks=1 00:22:38.015 --rc geninfo_unexecuted_blocks=1 00:22:38.015 00:22:38.015 ' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:38.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.015 --rc genhtml_branch_coverage=1 00:22:38.015 --rc genhtml_function_coverage=1 00:22:38.015 --rc genhtml_legend=1 00:22:38.015 --rc geninfo_all_blocks=1 00:22:38.015 --rc geninfo_unexecuted_blocks=1 00:22:38.015 00:22:38.015 ' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:38.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.015 --rc genhtml_branch_coverage=1 00:22:38.015 --rc genhtml_function_coverage=1 00:22:38.015 --rc genhtml_legend=1 00:22:38.015 --rc geninfo_all_blocks=1 00:22:38.015 --rc geninfo_unexecuted_blocks=1 00:22:38.015 00:22:38.015 ' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.015 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.275 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:38.275 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:38.275 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.275 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:40.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:40.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:40.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:40.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:22:40.176 00:22:40.176 --- 10.0.0.2 ping statistics --- 00:22:40.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.176 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:40.176 00:22:40.176 --- 10.0.0.1 ping statistics --- 00:22:40.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.176 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:40.176 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.177 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3015869 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3015869 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3015869 ']' 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.435 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.435 [2024-10-13 19:52:30.091819] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:40.435 [2024-10-13 19:52:30.091962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.435 [2024-10-13 19:52:30.235789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.693 [2024-10-13 19:52:30.372960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.693 [2024-10-13 19:52:30.373056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.693 [2024-10-13 19:52:30.373083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.693 [2024-10-13 19:52:30.373108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.693 [2024-10-13 19:52:30.373128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.693 [2024-10-13 19:52:30.374785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:41.627 true 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.627 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:41.886 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:41.886 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:41.886 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:42.144 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.144 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:42.402 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:42.402 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:42.402 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:42.968 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.969 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:43.227 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:43.227 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:43.227 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.227 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:43.485 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:43.485 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:43.485 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:43.743 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.743 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:44.001 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:44.001 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:44.001 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:44.259 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.259 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:44.517 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9zVaQYKOCA 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.C52RPxxcZB 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9zVaQYKOCA 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.C52RPxxcZB 00:22:44.776 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:45.034 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:45.602 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9zVaQYKOCA 00:22:45.602 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9zVaQYKOCA 00:22:45.602 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.860 [2024-10-13 19:52:35.581149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.860 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.118 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.376 [2024-10-13 19:52:36.178854] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.376 [2024-10-13 19:52:36.179223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.634 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.892 malloc0 00:22:46.892 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:47.150 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9zVaQYKOCA 00:22:47.408 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.667 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9zVaQYKOCA 00:22:59.862 Initializing NVMe Controllers 00:22:59.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.862 Initialization complete. Launching workers. 00:22:59.862 ======================================================== 00:22:59.862 Latency(us) 00:22:59.862 Device Information : IOPS MiB/s Average min max 00:22:59.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5634.75 22.01 11363.00 2189.39 12776.00 00:22:59.862 ======================================================== 00:22:59.863 Total : 5634.75 22.01 11363.00 2189.39 12776.00 00:22:59.863 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9zVaQYKOCA 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9zVaQYKOCA 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3017975 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3017975 /var/tmp/bdevperf.sock 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3017975 ']' 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
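The NVMeTLSkey-1:01:...: strings produced above by format_interchange_psk are the PSK interchange form of the raw hex text: judging from the encoded output, the key characters are used verbatim (not hex-decoded), a 4-byte CRC32 is appended, and the result is base64-encoded between the "NVMeTLSkey-1:<digest>:" prefix and a trailing colon. A minimal stand-alone sketch of that derivation follows; the little-endian CRC32 suffix is an assumption inferred from the observed output length, not read from the script source:

    # prints a string of the same shape as the contents of /tmp/tmp.9zVaQYKOCA
    python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); d = int(sys.argv[2]); crc = struct.pack("<I", zlib.crc32(k)); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k + crc).decode()))' 00112233445566778899aabbccddeeff 1

The test then stores each interchange string in a mode-0600 temp file (the echo -n / chmod pair at tls.sh@125-@129 above), since keyring_file_add_key takes a path to a key file rather than inline key material.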
00:22:59.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.863 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.863 [2024-10-13 19:52:47.694095] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:22:59.863 [2024-10-13 19:52:47.694228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017975 ] 00:22:59.863 [2024-10-13 19:52:47.819329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.863 [2024-10-13 19:52:47.940069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.863 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.863 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.863 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9zVaQYKOCA 00:22:59.863 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.863 [2024-10-13 19:52:49.225201] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.863 TLSTESTn1 00:22:59.863 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:59.863 Running I/O for 10 seconds... 
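Stripped of the xtrace noise, the positive-path TLS flow exercised above is short. rpc.py abbreviates the full scripts/rpc.py path used in the log; every option value is taken verbatim from the commands shown in the trace:

    # target side (default RPC socket): ssl sock impl, TLS 1.3, TLS-enabled listener, PSK bound to host1
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.9zVaQYKOCA
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side (bdevperf was started with -z -r /var/tmp/bdevperf.sock): same key file, same NQNs
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9zVaQYKOCA
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

spdk_nvme_perf reaches the same listener directly with -S ssl and --psk-path pointing at the key file, as in the run whose results appear above.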
00:23:01.729 2609.00 IOPS, 10.19 MiB/s [2024-10-13T17:52:52.478Z] 2670.00 IOPS, 10.43 MiB/s [2024-10-13T17:52:53.850Z] 2683.67 IOPS, 10.48 MiB/s [2024-10-13T17:52:54.783Z] 2690.75 IOPS, 10.51 MiB/s [2024-10-13T17:52:55.717Z] 2700.40 IOPS, 10.55 MiB/s [2024-10-13T17:52:56.650Z] 2706.00 IOPS, 10.57 MiB/s [2024-10-13T17:52:57.583Z] 2705.14 IOPS, 10.57 MiB/s [2024-10-13T17:52:58.521Z] 2708.50 IOPS, 10.58 MiB/s [2024-10-13T17:52:59.455Z] 2710.00 IOPS, 10.59 MiB/s [2024-10-13T17:52:59.714Z] 2710.50 IOPS, 10.59 MiB/s 00:23:09.899 Latency(us) 00:23:09.899 [2024-10-13T17:52:59.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.899 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.899 Verification LBA range: start 0x0 length 0x2000 00:23:09.899 TLSTESTn1 : 10.04 2711.62 10.59 0.00 0.00 47086.90 8009.96 37282.70 00:23:09.899 [2024-10-13T17:52:59.714Z] =================================================================================================================== 00:23:09.899 [2024-10-13T17:52:59.714Z] Total : 2711.62 10.59 0.00 0.00 47086.90 8009.96 37282.70 00:23:09.899 { 00:23:09.899 "results": [ 00:23:09.899 { 00:23:09.899 "job": "TLSTESTn1", 00:23:09.899 "core_mask": "0x4", 00:23:09.899 "workload": "verify", 00:23:09.899 "status": "finished", 00:23:09.899 "verify_range": { 00:23:09.899 "start": 0, 00:23:09.899 "length": 8192 00:23:09.899 }, 00:23:09.899 "queue_depth": 128, 00:23:09.899 "io_size": 4096, 00:23:09.899 "runtime": 10.042695, 00:23:09.899 "iops": 2711.622726768064, 00:23:09.899 "mibps": 10.59227627643775, 00:23:09.899 "io_failed": 0, 00:23:09.899 "io_timeout": 0, 00:23:09.899 "avg_latency_us": 47086.89634329983, 00:23:09.899 "min_latency_us": 8009.955555555555, 00:23:09.899 "max_latency_us": 37282.70222222222 00:23:09.899 } 00:23:09.899 ], 00:23:09.899 "core_count": 1 00:23:09.899 } 00:23:09.899 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.899 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3017975 00:23:09.899 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3017975 ']' 00:23:09.899 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3017975 00:23:09.899 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017975 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017975' 00:23:09.900 killing process with pid 3017975 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3017975 00:23:09.900 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.900 00:23:09.900 Latency(us) 00:23:09.900 [2024-10-13T17:52:59.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.900 [2024-10-13T17:52:59.715Z] 
=================================================================================================================== 00:23:09.900 [2024-10-13T17:52:59.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.900 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3017975 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C52RPxxcZB 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C52RPxxcZB 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C52RPxxcZB 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C52RPxxcZB 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3019432 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3019432 /var/tmp/bdevperf.sock 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3019432 ']' 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
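target/tls.sh@147 wraps the next run_bdevperf in NOT, i.e. the case passes only if the attach fails: bdevperf is handed the second key file, which the target never associated with host1. Only one RPC differs from the successful run above; the bdev_nvme_attach_controller call is repeated unchanged and is expected to come back with the Input/output error response seen further below:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C52RPxxcZB   # key material the target does not know for host1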
00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.834 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.834 [2024-10-13 19:53:00.450235] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:10.834 [2024-10-13 19:53:00.450376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019432 ] 00:23:10.834 [2024-10-13 19:53:00.578596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.093 [2024-10-13 19:53:00.706349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.658 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.658 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.658 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C52RPxxcZB 00:23:12.224 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.224 [2024-10-13 19:53:02.006564] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.224 [2024-10-13 19:53:02.018904] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:12.224 [2024-10-13 19:53:02.019289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:12.224 [2024-10-13 19:53:02.020267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:12.224 [2024-10-13 19:53:02.021258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.224 [2024-10-13 19:53:02.021310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:12.224 [2024-10-13 19:53:02.021331] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:12.224 [2024-10-13 19:53:02.021365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:12.224 request: 00:23:12.224 { 00:23:12.224 "name": "TLSTEST", 00:23:12.224 "trtype": "tcp", 00:23:12.224 "traddr": "10.0.0.2", 00:23:12.224 "adrfam": "ipv4", 00:23:12.224 "trsvcid": "4420", 00:23:12.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.224 "prchk_reftag": false, 00:23:12.224 "prchk_guard": false, 00:23:12.224 "hdgst": false, 00:23:12.225 "ddgst": false, 00:23:12.225 "psk": "key0", 00:23:12.225 "allow_unrecognized_csi": false, 00:23:12.225 "method": "bdev_nvme_attach_controller", 00:23:12.225 "req_id": 1 00:23:12.225 } 00:23:12.225 Got JSON-RPC error response 00:23:12.225 response: 00:23:12.225 { 00:23:12.225 "code": -5, 00:23:12.225 "message": "Input/output error" 00:23:12.225 } 00:23:12.225 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3019432 00:23:12.225 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3019432 ']' 00:23:12.225 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3019432 00:23:12.225 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019432 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019432' 00:23:12.483 killing process with pid 3019432 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3019432 00:23:12.483 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.483 00:23:12.483 Latency(us) 00:23:12.483 [2024-10-13T17:53:02.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.483 [2024-10-13T17:53:02.298Z] =================================================================================================================== 00:23:12.483 [2024-10-13T17:53:02.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.483 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3019432 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9zVaQYKOCA 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.9zVaQYKOCA 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9zVaQYKOCA 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9zVaQYKOCA 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3019706 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3019706 /var/tmp/bdevperf.sock 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3019706 ']' 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.049 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.308 [2024-10-13 19:53:02.937735] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:23:13.308 [2024-10-13 19:53:02.937886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019706 ] 00:23:13.308 [2024-10-13 19:53:03.060861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.566 [2024-10-13 19:53:03.179597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.131 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.131 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:14.131 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9zVaQYKOCA 00:23:14.697 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:14.955 [2024-10-13 19:53:04.518208] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.955 [2024-10-13 19:53:04.527955] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:14.955 [2024-10-13 19:53:04.528002] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:14.955 [2024-10-13 19:53:04.528078] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:14.955 [2024-10-13 19:53:04.529085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:14.955 [2024-10-13 19:53:04.530062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:14.955 [2024-10-13 19:53:04.531056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:14.955 [2024-10-13 19:53:04.531097] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:14.955 [2024-10-13 19:53:04.531125] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:14.955 [2024-10-13 19:53:04.531155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
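The errors above expose the lookup key the target uses to select a PSK for the handshake: the TLS PSK identity "NVMe0R01 <hostnqn> <subnqn>". Test tls.sh@150 presents the correct key material but under nqn.2016-06.io.spdk:host2, for which cnode1 has no PSK entry, so posix_sock_psk_find_session_server_cb cannot resolve the identity and the connection is dropped before the controller initializes. For that attach to succeed the target would additionally need an entry along these lines (deliberately not configured here, since the test asserts failure):

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0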
00:23:14.955 request: 00:23:14.955 { 00:23:14.955 "name": "TLSTEST", 00:23:14.955 "trtype": "tcp", 00:23:14.955 "traddr": "10.0.0.2", 00:23:14.955 "adrfam": "ipv4", 00:23:14.955 "trsvcid": "4420", 00:23:14.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:14.955 "prchk_reftag": false, 00:23:14.955 "prchk_guard": false, 00:23:14.955 "hdgst": false, 00:23:14.955 "ddgst": false, 00:23:14.955 "psk": "key0", 00:23:14.955 "allow_unrecognized_csi": false, 00:23:14.955 "method": "bdev_nvme_attach_controller", 00:23:14.955 "req_id": 1 00:23:14.955 } 00:23:14.955 Got JSON-RPC error response 00:23:14.955 response: 00:23:14.955 { 00:23:14.955 "code": -5, 00:23:14.955 "message": "Input/output error" 00:23:14.955 } 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3019706 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3019706 ']' 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3019706 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019706 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019706' 00:23:14.955 killing process with pid 3019706 00:23:14.955 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3019706 00:23:14.955 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.955 00:23:14.955 Latency(us) 00:23:14.955 [2024-10-13T17:53:04.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.955 [2024-10-13T17:53:04.770Z] =================================================================================================================== 00:23:14.955 [2024-10-13T17:53:04.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.956 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3019706 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9zVaQYKOCA 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.9zVaQYKOCA 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9zVaQYKOCA 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9zVaQYKOCA 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.890 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3019988 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3019988 /var/tmp/bdevperf.sock 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3019988 ']' 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.891 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 [2024-10-13 19:53:05.467514] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:23:15.891 [2024-10-13 19:53:05.467663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019988 ] 00:23:15.891 [2024-10-13 19:53:05.600117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.149 [2024-10-13 19:53:05.724913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.715 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.715 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.715 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9zVaQYKOCA 00:23:17.281 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.539 [2024-10-13 19:53:07.099457] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.539 [2024-10-13 19:53:07.112130] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:17.539 [2024-10-13 19:53:07.112170] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:17.539 [2024-10-13 19:53:07.112237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:17.539 [2024-10-13 19:53:07.112670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:17.539 [2024-10-13 19:53:07.113647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:17.539 [2024-10-13 19:53:07.114638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:17.539 [2024-10-13 19:53:07.114692] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:17.539 [2024-10-13 19:53:07.114728] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:17.539 [2024-10-13 19:53:07.114770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:17.539 request: 00:23:17.539 { 00:23:17.539 "name": "TLSTEST", 00:23:17.539 "trtype": "tcp", 00:23:17.539 "traddr": "10.0.0.2", 00:23:17.539 "adrfam": "ipv4", 00:23:17.539 "trsvcid": "4420", 00:23:17.539 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.539 "prchk_reftag": false, 00:23:17.539 "prchk_guard": false, 00:23:17.539 "hdgst": false, 00:23:17.540 "ddgst": false, 00:23:17.540 "psk": "key0", 00:23:17.540 "allow_unrecognized_csi": false, 00:23:17.540 "method": "bdev_nvme_attach_controller", 00:23:17.540 "req_id": 1 00:23:17.540 } 00:23:17.540 Got JSON-RPC error response 00:23:17.540 response: 00:23:17.540 { 00:23:17.540 "code": -5, 00:23:17.540 "message": "Input/output error" 00:23:17.540 } 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3019988 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3019988 ']' 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3019988 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019988 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019988' 00:23:17.540 killing process with pid 3019988 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3019988 00:23:17.540 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.540 00:23:17.540 Latency(us) 00:23:17.540 [2024-10-13T17:53:07.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.540 [2024-10-13T17:53:07.355Z] =================================================================================================================== 00:23:17.540 [2024-10-13T17:53:07.355Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.540 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3019988 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:18.472 
19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3020376 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3020376 /var/tmp/bdevperf.sock 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3020376 ']' 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.472 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.472 [2024-10-13 19:53:08.071159] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:23:18.472 [2024-10-13 19:53:08.071309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020376 ] 00:23:18.472 [2024-10-13 19:53:08.197059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.731 [2024-10-13 19:53:08.317090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.297 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.297 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:19.297 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:19.555 [2024-10-13 19:53:09.323372] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:19.555 [2024-10-13 19:53:09.323474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:19.555 request: 00:23:19.555 { 00:23:19.555 "name": "key0", 00:23:19.555 "path": "", 00:23:19.555 "method": "keyring_file_add_key", 00:23:19.555 "req_id": 1 00:23:19.555 } 00:23:19.555 Got JSON-RPC error response 00:23:19.555 response: 00:23:19.555 { 00:23:19.555 "code": -1, 00:23:19.555 "message": "Operation not permitted" 00:23:19.555 } 00:23:19.555 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.813 [2024-10-13 19:53:09.588294] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.813 [2024-10-13 19:53:09.588403] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:19.813 request: 00:23:19.813 { 00:23:19.813 "name": "TLSTEST", 00:23:19.813 "trtype": "tcp", 00:23:19.813 "traddr": "10.0.0.2", 00:23:19.813 "adrfam": "ipv4", 00:23:19.813 "trsvcid": "4420", 00:23:19.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.813 "prchk_reftag": false, 00:23:19.813 "prchk_guard": false, 00:23:19.813 "hdgst": false, 00:23:19.813 "ddgst": false, 00:23:19.813 "psk": "key0", 00:23:19.813 "allow_unrecognized_csi": false, 00:23:19.813 "method": "bdev_nvme_attach_controller", 00:23:19.813 "req_id": 1 00:23:19.813 } 00:23:19.813 Got JSON-RPC error response 00:23:19.813 response: 00:23:19.813 { 00:23:19.813 "code": -126, 00:23:19.813 "message": "Required key not available" 00:23:19.813 } 00:23:19.813 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3020376 00:23:19.813 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3020376 ']' 00:23:19.813 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3020376 00:23:19.813 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:19.813 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.813 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3020376 00:23:20.071 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:20.071 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:20.071 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3020376' 00:23:20.071 killing process with pid 3020376 00:23:20.071 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3020376 00:23:20.071 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.071 00:23:20.071 Latency(us) 00:23:20.071 [2024-10-13T17:53:09.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.071 [2024-10-13T17:53:09.886Z] =================================================================================================================== 00:23:20.071 [2024-10-13T17:53:09.886Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.071 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3020376 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3015869 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3015869 ']' 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3015869 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3015869 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3015869' 00:23:21.004 killing process with pid 3015869 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3015869 00:23:21.004 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3015869 00:23:21.940 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.940 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.940 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:21.940 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:21.940 19:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.940 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:21.940 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.UL0368ELSH 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.UL0368ELSH 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3020802 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3020802 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3020802 ']' 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.199 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.199 [2024-10-13 19:53:11.871981] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:22.199 [2024-10-13 19:53:11.872148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.456 [2024-10-13 19:53:12.015300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.456 [2024-10-13 19:53:12.151197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.456 [2024-10-13 19:53:12.151304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
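target/tls.sh@160 repeats the key derivation with a longer (48-hex-character) key and digest id 2, giving the NVMeTLSkey-1:02:... form; to the best of my knowledge the 02 hash field selects SHA-384 for the retained PSK, versus SHA-256 for 01. The key-file preparation mirrors the earlier short-key case (tls.sh@162-@163):

    echo -n "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:" > /tmp/tmp.UL0368ELSH
    chmod 0600 /tmp/tmp.UL0368ELSH

A fresh nvmf target is then started with -m 0x2 and the same setup_nvmf_tgt sequence is rerun against /tmp/tmp.UL0368ELSH, as the trace below shows.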
00:23:22.456 [2024-10-13 19:53:12.151330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.456 [2024-10-13 19:53:12.151354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.456 [2024-10-13 19:53:12.151374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.456 [2024-10-13 19:53:12.153065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.UL0368ELSH 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UL0368ELSH 00:23:23.390 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.390 [2024-10-13 19:53:13.114790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.390 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.648 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:23.905 [2024-10-13 19:53:13.668293] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.905 [2024-10-13 19:53:13.668679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.905 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.500 malloc0 00:23:24.500 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.500 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:23:24.776 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UL0368ELSH 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UL0368ELSH 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3021218 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3021218 /var/tmp/bdevperf.sock 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3021218 ']' 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.062 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.321 [2024-10-13 19:53:14.873428] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
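[editor's note] The setup_nvmf_tgt steps traced at target/tls.sh@50-59 above reduce to the following RPC sequence (rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag marks the listener as TLS-capable (hence the "TLS support is considered experimental" notice), and --psk key0 binds the keyring entry to host nqn.2016-06.io.spdk:host1; bdevperf is started next as the TLS initiator.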
00:23:25.321 [2024-10-13 19:53:14.873572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021218 ] 00:23:25.321 [2024-10-13 19:53:14.998619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.321 [2024-10-13 19:53:15.121623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.254 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.254 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:26.254 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:23:26.512 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.770 [2024-10-13 19:53:16.388038] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.770 TLSTESTn1 00:23:26.770 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:27.027 Running I/O for 10 seconds... 00:23:28.893 2672.00 IOPS, 10.44 MiB/s [2024-10-13T17:53:19.643Z] 2689.50 IOPS, 10.51 MiB/s [2024-10-13T17:53:21.016Z] 2695.67 IOPS, 10.53 MiB/s [2024-10-13T17:53:21.948Z] 2696.00 IOPS, 10.53 MiB/s [2024-10-13T17:53:22.879Z] 2703.00 IOPS, 10.56 MiB/s [2024-10-13T17:53:23.812Z] 2705.33 IOPS, 10.57 MiB/s [2024-10-13T17:53:24.745Z] 2709.86 IOPS, 10.59 MiB/s [2024-10-13T17:53:25.677Z] 2713.25 IOPS, 10.60 MiB/s [2024-10-13T17:53:27.049Z] 2713.89 IOPS, 10.60 MiB/s [2024-10-13T17:53:27.049Z] 2715.50 IOPS, 10.61 MiB/s 00:23:37.234 Latency(us) 00:23:37.234 [2024-10-13T17:53:27.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.234 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:37.234 Verification LBA range: start 0x0 length 0x2000 00:23:37.234 TLSTESTn1 : 10.04 2718.32 10.62 0.00 0.00 46984.78 7767.23 42719.76 00:23:37.234 [2024-10-13T17:53:27.049Z] =================================================================================================================== 00:23:37.234 [2024-10-13T17:53:27.049Z] Total : 2718.32 10.62 0.00 0.00 46984.78 7767.23 42719.76 00:23:37.234 { 00:23:37.234 "results": [ 00:23:37.234 { 00:23:37.234 "job": "TLSTESTn1", 00:23:37.234 "core_mask": "0x4", 00:23:37.235 "workload": "verify", 00:23:37.235 "status": "finished", 00:23:37.235 "verify_range": { 00:23:37.235 "start": 0, 00:23:37.235 "length": 8192 00:23:37.235 }, 00:23:37.235 "queue_depth": 128, 00:23:37.235 "io_size": 4096, 00:23:37.235 "runtime": 10.03562, 00:23:37.235 "iops": 2718.317353586525, 00:23:37.235 "mibps": 10.618427162447363, 00:23:37.235 "io_failed": 0, 00:23:37.235 "io_timeout": 0, 00:23:37.235 "avg_latency_us": 46984.77779428696, 00:23:37.235 "min_latency_us": 7767.22962962963, 00:23:37.235 "max_latency_us": 42719.76296296297 00:23:37.235 } 00:23:37.235 ], 00:23:37.235 
"core_count": 1 00:23:37.235 } 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3021218 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3021218 ']' 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3021218 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3021218 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3021218' 00:23:37.235 killing process with pid 3021218 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3021218 00:23:37.235 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.235 00:23:37.235 Latency(us) 00:23:37.235 [2024-10-13T17:53:27.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.235 [2024-10-13T17:53:27.050Z] =================================================================================================================== 00:23:37.235 [2024-10-13T17:53:27.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.235 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3021218 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.UL0368ELSH 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UL0368ELSH 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UL0368ELSH 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UL0368ELSH 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UL0368ELSH 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3022670 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3022670 /var/tmp/bdevperf.sock 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3022670 ']' 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.801 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.059 [2024-10-13 19:53:27.648172] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
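[editor's note] At target/tls.sh@171-172, traced just above, the key file is switched to 0666 and the same run_bdevperf invocation is wrapped in NOT, i.e. this attempt is expected to fail: as the errors further down show, the keyring rejects a PSK file that is group- or world-readable, and the subsequent controller attach then has no key to load. A minimal reproduction, with the observed JSON-RPC results summarized in comments:

chmod 0666 /tmp/tmp.UL0368ELSH
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UL0368ELSH
# -> error -1 "Operation not permitted"
#    (keyring_file_check_path: Invalid permissions for key file '/tmp/tmp.UL0368ELSH': 0100666)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST ... --psk key0   # same arguments as above
# -> error -126 "Required key not available" (bdev_nvme: Could not load PSK: key0)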
00:23:38.059 [2024-10-13 19:53:27.648330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022670 ] 00:23:38.059 [2024-10-13 19:53:27.779766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.320 [2024-10-13 19:53:27.903458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.884 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.884 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:38.884 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:23:39.142 [2024-10-13 19:53:28.925758] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UL0368ELSH': 0100666 00:23:39.142 [2024-10-13 19:53:28.925811] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:39.142 request: 00:23:39.142 { 00:23:39.142 "name": "key0", 00:23:39.142 "path": "/tmp/tmp.UL0368ELSH", 00:23:39.142 "method": "keyring_file_add_key", 00:23:39.142 "req_id": 1 00:23:39.142 } 00:23:39.142 Got JSON-RPC error response 00:23:39.142 response: 00:23:39.142 { 00:23:39.142 "code": -1, 00:23:39.142 "message": "Operation not permitted" 00:23:39.142 } 00:23:39.142 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.399 [2024-10-13 19:53:29.198620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.399 [2024-10-13 19:53:29.198712] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:39.399 request: 00:23:39.399 { 00:23:39.399 "name": "TLSTEST", 00:23:39.399 "trtype": "tcp", 00:23:39.399 "traddr": "10.0.0.2", 00:23:39.399 "adrfam": "ipv4", 00:23:39.399 "trsvcid": "4420", 00:23:39.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.399 "prchk_reftag": false, 00:23:39.399 "prchk_guard": false, 00:23:39.399 "hdgst": false, 00:23:39.399 "ddgst": false, 00:23:39.399 "psk": "key0", 00:23:39.399 "allow_unrecognized_csi": false, 00:23:39.399 "method": "bdev_nvme_attach_controller", 00:23:39.399 "req_id": 1 00:23:39.399 } 00:23:39.399 Got JSON-RPC error response 00:23:39.399 response: 00:23:39.399 { 00:23:39.399 "code": -126, 00:23:39.399 "message": "Required key not available" 00:23:39.399 } 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3022670 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3022670 ']' 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3022670 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3022670 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3022670' 00:23:39.658 killing process with pid 3022670 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3022670 00:23:39.658 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.658 00:23:39.658 Latency(us) 00:23:39.658 [2024-10-13T17:53:29.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.658 [2024-10-13T17:53:29.473Z] =================================================================================================================== 00:23:39.658 [2024-10-13T17:53:29.473Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.658 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3022670 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3020802 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3020802 ']' 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3020802 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.225 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3020802 00:23:40.482 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:40.482 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:40.482 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3020802' 00:23:40.482 killing process with pid 3020802 00:23:40.482 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3020802 00:23:40.482 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3020802 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=3023092 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3023092 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3023092 ']' 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.858 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.858 [2024-10-13 19:53:31.404874] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:41.858 [2024-10-13 19:53:31.405051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.858 [2024-10-13 19:53:31.536301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.858 [2024-10-13 19:53:31.654375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.858 [2024-10-13 19:53:31.654485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.858 [2024-10-13 19:53:31.654507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.858 [2024-10-13 19:53:31.654534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.858 [2024-10-13 19:53:31.654552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.858 [2024-10-13 19:53:31.656004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.UL0368ELSH 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UL0368ELSH 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.UL0368ELSH 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UL0368ELSH 00:23:42.793 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.051 [2024-10-13 19:53:32.644779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.051 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:43.309 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:43.567 [2024-10-13 19:53:33.198431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.567 [2024-10-13 19:53:33.198809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.567 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:43.824 malloc0 00:23:43.824 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.082 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:23:44.340 [2024-10-13 
19:53:34.034571] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UL0368ELSH': 0100666 00:23:44.340 [2024-10-13 19:53:34.034644] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:44.340 request: 00:23:44.340 { 00:23:44.340 "name": "key0", 00:23:44.340 "path": "/tmp/tmp.UL0368ELSH", 00:23:44.340 "method": "keyring_file_add_key", 00:23:44.340 "req_id": 1 00:23:44.340 } 00:23:44.340 Got JSON-RPC error response 00:23:44.340 response: 00:23:44.340 { 00:23:44.340 "code": -1, 00:23:44.340 "message": "Operation not permitted" 00:23:44.340 } 00:23:44.340 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.598 [2024-10-13 19:53:34.311420] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:44.598 [2024-10-13 19:53:34.311511] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:44.598 request: 00:23:44.598 { 00:23:44.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.598 "host": "nqn.2016-06.io.spdk:host1", 00:23:44.598 "psk": "key0", 00:23:44.598 "method": "nvmf_subsystem_add_host", 00:23:44.598 "req_id": 1 00:23:44.598 } 00:23:44.598 Got JSON-RPC error response 00:23:44.598 response: 00:23:44.598 { 00:23:44.598 "code": -32603, 00:23:44.598 "message": "Internal error" 00:23:44.598 } 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3023092 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3023092 ']' 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3023092 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3023092 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3023092' 00:23:44.598 killing process with pid 3023092 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3023092 00:23:44.598 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3023092 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.UL0368ELSH 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:45.973 19:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3023644 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3023644 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3023644 ']' 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.973 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.973 [2024-10-13 19:53:35.601452] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:45.973 [2024-10-13 19:53:35.601610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.973 [2024-10-13 19:53:35.741010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.231 [2024-10-13 19:53:35.876549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.231 [2024-10-13 19:53:35.876654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.231 [2024-10-13 19:53:35.876680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.231 [2024-10-13 19:53:35.876705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.231 [2024-10-13 19:53:35.876725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
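[editor's note] The target-side variant of the same permission check was traced at target/tls.sh@178 just above, this time against the nvmf_tgt RPC socket rather than bdevperf's, and fails in two stages while the key file is still 0666; only after chmod 0600 (tls.sh@182) does the fresh target started here proceed. Summarized:

rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH
# -> error -1 "Operation not permitted" (file mode 0100666 rejected by the keyring)
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# -> error -32603 "Internal error": Key 'key0' does not exist, so the host cannot
#    be added to the TCP transport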
00:23:46.231 [2024-10-13 19:53:35.878426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.UL0368ELSH 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UL0368ELSH 00:23:46.798 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.056 [2024-10-13 19:53:36.838834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.056 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.622 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.622 [2024-10-13 19:53:37.432525] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.622 [2024-10-13 19:53:37.432905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.881 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:48.139 malloc0 00:23:48.139 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:23:48.655 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.221 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3024005 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3024005 /var/tmp/bdevperf.sock 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3024005 ']' 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.222 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.222 [2024-10-13 19:53:38.835954] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:49.222 [2024-10-13 19:53:38.836096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024005 ] 00:23:49.222 [2024-10-13 19:53:38.999743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.480 [2024-10-13 19:53:39.144800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.414 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.414 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:50.414 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:23:50.415 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.706 [2024-10-13 19:53:40.466175] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.989 TLSTESTn1 00:23:50.989 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:51.247 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:51.247 "subsystems": [ 00:23:51.247 { 00:23:51.247 "subsystem": "keyring", 00:23:51.247 "config": [ 00:23:51.247 { 00:23:51.247 "method": "keyring_file_add_key", 00:23:51.247 "params": { 00:23:51.247 "name": "key0", 00:23:51.247 "path": "/tmp/tmp.UL0368ELSH" 00:23:51.247 } 00:23:51.247 } 00:23:51.247 ] 00:23:51.247 }, 00:23:51.247 { 00:23:51.247 "subsystem": "iobuf", 00:23:51.247 "config": [ 00:23:51.247 { 00:23:51.247 "method": "iobuf_set_options", 00:23:51.247 "params": { 00:23:51.247 "small_pool_count": 8192, 00:23:51.247 "large_pool_count": 1024, 00:23:51.247 "small_bufsize": 8192, 00:23:51.247 "large_bufsize": 135168 00:23:51.247 } 00:23:51.247 } 00:23:51.247 ] 00:23:51.247 }, 00:23:51.247 { 00:23:51.247 "subsystem": "sock", 00:23:51.247 "config": [ 00:23:51.247 { 00:23:51.247 "method": "sock_set_default_impl", 00:23:51.247 "params": { 00:23:51.247 "impl_name": "posix" 00:23:51.247 } 00:23:51.247 }, 
00:23:51.247 { 00:23:51.247 "method": "sock_impl_set_options", 00:23:51.247 "params": { 00:23:51.247 "impl_name": "ssl", 00:23:51.247 "recv_buf_size": 4096, 00:23:51.247 "send_buf_size": 4096, 00:23:51.247 "enable_recv_pipe": true, 00:23:51.247 "enable_quickack": false, 00:23:51.247 "enable_placement_id": 0, 00:23:51.247 "enable_zerocopy_send_server": true, 00:23:51.247 "enable_zerocopy_send_client": false, 00:23:51.247 "zerocopy_threshold": 0, 00:23:51.247 "tls_version": 0, 00:23:51.247 "enable_ktls": false 00:23:51.247 } 00:23:51.247 }, 00:23:51.247 { 00:23:51.247 "method": "sock_impl_set_options", 00:23:51.247 "params": { 00:23:51.247 "impl_name": "posix", 00:23:51.247 "recv_buf_size": 2097152, 00:23:51.248 "send_buf_size": 2097152, 00:23:51.248 "enable_recv_pipe": true, 00:23:51.248 "enable_quickack": false, 00:23:51.248 "enable_placement_id": 0, 00:23:51.248 "enable_zerocopy_send_server": true, 00:23:51.248 "enable_zerocopy_send_client": false, 00:23:51.248 "zerocopy_threshold": 0, 00:23:51.248 "tls_version": 0, 00:23:51.248 "enable_ktls": false 00:23:51.248 } 00:23:51.248 } 00:23:51.248 ] 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "subsystem": "vmd", 00:23:51.248 "config": [] 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "subsystem": "accel", 00:23:51.248 "config": [ 00:23:51.248 { 00:23:51.248 "method": "accel_set_options", 00:23:51.248 "params": { 00:23:51.248 "small_cache_size": 128, 00:23:51.248 "large_cache_size": 16, 00:23:51.248 "task_count": 2048, 00:23:51.248 "sequence_count": 2048, 00:23:51.248 "buf_count": 2048 00:23:51.248 } 00:23:51.248 } 00:23:51.248 ] 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "subsystem": "bdev", 00:23:51.248 "config": [ 00:23:51.248 { 00:23:51.248 "method": "bdev_set_options", 00:23:51.248 "params": { 00:23:51.248 "bdev_io_pool_size": 65535, 00:23:51.248 "bdev_io_cache_size": 256, 00:23:51.248 "bdev_auto_examine": true, 00:23:51.248 "iobuf_small_cache_size": 128, 00:23:51.248 "iobuf_large_cache_size": 16 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "bdev_raid_set_options", 00:23:51.248 "params": { 00:23:51.248 "process_window_size_kb": 1024, 00:23:51.248 "process_max_bandwidth_mb_sec": 0 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "bdev_iscsi_set_options", 00:23:51.248 "params": { 00:23:51.248 "timeout_sec": 30 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "bdev_nvme_set_options", 00:23:51.248 "params": { 00:23:51.248 "action_on_timeout": "none", 00:23:51.248 "timeout_us": 0, 00:23:51.248 "timeout_admin_us": 0, 00:23:51.248 "keep_alive_timeout_ms": 10000, 00:23:51.248 "arbitration_burst": 0, 00:23:51.248 "low_priority_weight": 0, 00:23:51.248 "medium_priority_weight": 0, 00:23:51.248 "high_priority_weight": 0, 00:23:51.248 "nvme_adminq_poll_period_us": 10000, 00:23:51.248 "nvme_ioq_poll_period_us": 0, 00:23:51.248 "io_queue_requests": 0, 00:23:51.248 "delay_cmd_submit": true, 00:23:51.248 "transport_retry_count": 4, 00:23:51.248 "bdev_retry_count": 3, 00:23:51.248 "transport_ack_timeout": 0, 00:23:51.248 "ctrlr_loss_timeout_sec": 0, 00:23:51.248 "reconnect_delay_sec": 0, 00:23:51.248 "fast_io_fail_timeout_sec": 0, 00:23:51.248 "disable_auto_failback": false, 00:23:51.248 "generate_uuids": false, 00:23:51.248 "transport_tos": 0, 00:23:51.248 "nvme_error_stat": false, 00:23:51.248 "rdma_srq_size": 0, 00:23:51.248 "io_path_stat": false, 00:23:51.248 "allow_accel_sequence": false, 00:23:51.248 "rdma_max_cq_size": 0, 00:23:51.248 "rdma_cm_event_timeout_ms": 0, 00:23:51.248 
"dhchap_digests": [ 00:23:51.248 "sha256", 00:23:51.248 "sha384", 00:23:51.248 "sha512" 00:23:51.248 ], 00:23:51.248 "dhchap_dhgroups": [ 00:23:51.248 "null", 00:23:51.248 "ffdhe2048", 00:23:51.248 "ffdhe3072", 00:23:51.248 "ffdhe4096", 00:23:51.248 "ffdhe6144", 00:23:51.248 "ffdhe8192" 00:23:51.248 ] 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "bdev_nvme_set_hotplug", 00:23:51.248 "params": { 00:23:51.248 "period_us": 100000, 00:23:51.248 "enable": false 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "bdev_malloc_create", 00:23:51.248 "params": { 00:23:51.248 "name": "malloc0", 00:23:51.248 "num_blocks": 8192, 00:23:51.248 "block_size": 4096, 00:23:51.248 "physical_block_size": 4096, 00:23:51.248 "uuid": "8d234b39-6db1-4d10-beb4-a8108905d75c", 00:23:51.248 "optimal_io_boundary": 0, 00:23:51.248 "md_size": 0, 00:23:51.248 "dif_type": 0, 00:23:51.248 "dif_is_head_of_md": false, 00:23:51.248 "dif_pi_format": 0 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "bdev_wait_for_examine" 00:23:51.248 } 00:23:51.248 ] 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "subsystem": "nbd", 00:23:51.248 "config": [] 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "subsystem": "scheduler", 00:23:51.248 "config": [ 00:23:51.248 { 00:23:51.248 "method": "framework_set_scheduler", 00:23:51.248 "params": { 00:23:51.248 "name": "static" 00:23:51.248 } 00:23:51.248 } 00:23:51.248 ] 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "subsystem": "nvmf", 00:23:51.248 "config": [ 00:23:51.248 { 00:23:51.248 "method": "nvmf_set_config", 00:23:51.248 "params": { 00:23:51.248 "discovery_filter": "match_any", 00:23:51.248 "admin_cmd_passthru": { 00:23:51.248 "identify_ctrlr": false 00:23:51.248 }, 00:23:51.248 "dhchap_digests": [ 00:23:51.248 "sha256", 00:23:51.248 "sha384", 00:23:51.248 "sha512" 00:23:51.248 ], 00:23:51.248 "dhchap_dhgroups": [ 00:23:51.248 "null", 00:23:51.248 "ffdhe2048", 00:23:51.248 "ffdhe3072", 00:23:51.248 "ffdhe4096", 00:23:51.248 "ffdhe6144", 00:23:51.248 "ffdhe8192" 00:23:51.248 ] 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_set_max_subsystems", 00:23:51.248 "params": { 00:23:51.248 "max_subsystems": 1024 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_set_crdt", 00:23:51.248 "params": { 00:23:51.248 "crdt1": 0, 00:23:51.248 "crdt2": 0, 00:23:51.248 "crdt3": 0 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_create_transport", 00:23:51.248 "params": { 00:23:51.248 "trtype": "TCP", 00:23:51.248 "max_queue_depth": 128, 00:23:51.248 "max_io_qpairs_per_ctrlr": 127, 00:23:51.248 "in_capsule_data_size": 4096, 00:23:51.248 "max_io_size": 131072, 00:23:51.248 "io_unit_size": 131072, 00:23:51.248 "max_aq_depth": 128, 00:23:51.248 "num_shared_buffers": 511, 00:23:51.248 "buf_cache_size": 4294967295, 00:23:51.248 "dif_insert_or_strip": false, 00:23:51.248 "zcopy": false, 00:23:51.248 "c2h_success": false, 00:23:51.248 "sock_priority": 0, 00:23:51.248 "abort_timeout_sec": 1, 00:23:51.248 "ack_timeout": 0, 00:23:51.248 "data_wr_pool_size": 0 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_create_subsystem", 00:23:51.248 "params": { 00:23:51.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.248 "allow_any_host": false, 00:23:51.248 "serial_number": "SPDK00000000000001", 00:23:51.248 "model_number": "SPDK bdev Controller", 00:23:51.248 "max_namespaces": 10, 00:23:51.248 "min_cntlid": 1, 00:23:51.248 "max_cntlid": 65519, 00:23:51.248 
"ana_reporting": false 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_subsystem_add_host", 00:23:51.248 "params": { 00:23:51.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.248 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.248 "psk": "key0" 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_subsystem_add_ns", 00:23:51.248 "params": { 00:23:51.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.248 "namespace": { 00:23:51.248 "nsid": 1, 00:23:51.248 "bdev_name": "malloc0", 00:23:51.248 "nguid": "8D234B396DB14D10BEB4A8108905D75C", 00:23:51.248 "uuid": "8d234b39-6db1-4d10-beb4-a8108905d75c", 00:23:51.248 "no_auto_visible": false 00:23:51.248 } 00:23:51.248 } 00:23:51.248 }, 00:23:51.248 { 00:23:51.248 "method": "nvmf_subsystem_add_listener", 00:23:51.248 "params": { 00:23:51.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.248 "listen_address": { 00:23:51.248 "trtype": "TCP", 00:23:51.248 "adrfam": "IPv4", 00:23:51.248 "traddr": "10.0.0.2", 00:23:51.248 "trsvcid": "4420" 00:23:51.248 }, 00:23:51.248 "secure_channel": true 00:23:51.248 } 00:23:51.248 } 00:23:51.248 ] 00:23:51.248 } 00:23:51.248 ] 00:23:51.248 }' 00:23:51.249 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:51.506 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:51.506 "subsystems": [ 00:23:51.506 { 00:23:51.506 "subsystem": "keyring", 00:23:51.506 "config": [ 00:23:51.506 { 00:23:51.506 "method": "keyring_file_add_key", 00:23:51.506 "params": { 00:23:51.506 "name": "key0", 00:23:51.506 "path": "/tmp/tmp.UL0368ELSH" 00:23:51.506 } 00:23:51.506 } 00:23:51.506 ] 00:23:51.506 }, 00:23:51.506 { 00:23:51.506 "subsystem": "iobuf", 00:23:51.506 "config": [ 00:23:51.506 { 00:23:51.506 "method": "iobuf_set_options", 00:23:51.506 "params": { 00:23:51.506 "small_pool_count": 8192, 00:23:51.506 "large_pool_count": 1024, 00:23:51.506 "small_bufsize": 8192, 00:23:51.506 "large_bufsize": 135168 00:23:51.506 } 00:23:51.506 } 00:23:51.506 ] 00:23:51.506 }, 00:23:51.506 { 00:23:51.506 "subsystem": "sock", 00:23:51.506 "config": [ 00:23:51.506 { 00:23:51.506 "method": "sock_set_default_impl", 00:23:51.506 "params": { 00:23:51.506 "impl_name": "posix" 00:23:51.506 } 00:23:51.506 }, 00:23:51.506 { 00:23:51.506 "method": "sock_impl_set_options", 00:23:51.506 "params": { 00:23:51.506 "impl_name": "ssl", 00:23:51.506 "recv_buf_size": 4096, 00:23:51.506 "send_buf_size": 4096, 00:23:51.506 "enable_recv_pipe": true, 00:23:51.506 "enable_quickack": false, 00:23:51.506 "enable_placement_id": 0, 00:23:51.507 "enable_zerocopy_send_server": true, 00:23:51.507 "enable_zerocopy_send_client": false, 00:23:51.507 "zerocopy_threshold": 0, 00:23:51.507 "tls_version": 0, 00:23:51.507 "enable_ktls": false 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "sock_impl_set_options", 00:23:51.507 "params": { 00:23:51.507 "impl_name": "posix", 00:23:51.507 "recv_buf_size": 2097152, 00:23:51.507 "send_buf_size": 2097152, 00:23:51.507 "enable_recv_pipe": true, 00:23:51.507 "enable_quickack": false, 00:23:51.507 "enable_placement_id": 0, 00:23:51.507 "enable_zerocopy_send_server": true, 00:23:51.507 "enable_zerocopy_send_client": false, 00:23:51.507 "zerocopy_threshold": 0, 00:23:51.507 "tls_version": 0, 00:23:51.507 "enable_ktls": false 00:23:51.507 } 00:23:51.507 } 00:23:51.507 ] 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 
"subsystem": "vmd", 00:23:51.507 "config": [] 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "subsystem": "accel", 00:23:51.507 "config": [ 00:23:51.507 { 00:23:51.507 "method": "accel_set_options", 00:23:51.507 "params": { 00:23:51.507 "small_cache_size": 128, 00:23:51.507 "large_cache_size": 16, 00:23:51.507 "task_count": 2048, 00:23:51.507 "sequence_count": 2048, 00:23:51.507 "buf_count": 2048 00:23:51.507 } 00:23:51.507 } 00:23:51.507 ] 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "subsystem": "bdev", 00:23:51.507 "config": [ 00:23:51.507 { 00:23:51.507 "method": "bdev_set_options", 00:23:51.507 "params": { 00:23:51.507 "bdev_io_pool_size": 65535, 00:23:51.507 "bdev_io_cache_size": 256, 00:23:51.507 "bdev_auto_examine": true, 00:23:51.507 "iobuf_small_cache_size": 128, 00:23:51.507 "iobuf_large_cache_size": 16 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "bdev_raid_set_options", 00:23:51.507 "params": { 00:23:51.507 "process_window_size_kb": 1024, 00:23:51.507 "process_max_bandwidth_mb_sec": 0 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "bdev_iscsi_set_options", 00:23:51.507 "params": { 00:23:51.507 "timeout_sec": 30 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "bdev_nvme_set_options", 00:23:51.507 "params": { 00:23:51.507 "action_on_timeout": "none", 00:23:51.507 "timeout_us": 0, 00:23:51.507 "timeout_admin_us": 0, 00:23:51.507 "keep_alive_timeout_ms": 10000, 00:23:51.507 "arbitration_burst": 0, 00:23:51.507 "low_priority_weight": 0, 00:23:51.507 "medium_priority_weight": 0, 00:23:51.507 "high_priority_weight": 0, 00:23:51.507 "nvme_adminq_poll_period_us": 10000, 00:23:51.507 "nvme_ioq_poll_period_us": 0, 00:23:51.507 "io_queue_requests": 512, 00:23:51.507 "delay_cmd_submit": true, 00:23:51.507 "transport_retry_count": 4, 00:23:51.507 "bdev_retry_count": 3, 00:23:51.507 "transport_ack_timeout": 0, 00:23:51.507 "ctrlr_loss_timeout_sec": 0, 00:23:51.507 "reconnect_delay_sec": 0, 00:23:51.507 "fast_io_fail_timeout_sec": 0, 00:23:51.507 "disable_auto_failback": false, 00:23:51.507 "generate_uuids": false, 00:23:51.507 "transport_tos": 0, 00:23:51.507 "nvme_error_stat": false, 00:23:51.507 "rdma_srq_size": 0, 00:23:51.507 "io_path_stat": false, 00:23:51.507 "allow_accel_sequence": false, 00:23:51.507 "rdma_max_cq_size": 0, 00:23:51.507 "rdma_cm_event_timeout_ms": 0, 00:23:51.507 "dhchap_digests": [ 00:23:51.507 "sha256", 00:23:51.507 "sha384", 00:23:51.507 "sha512" 00:23:51.507 ], 00:23:51.507 "dhchap_dhgroups": [ 00:23:51.507 "null", 00:23:51.507 "ffdhe2048", 00:23:51.507 "ffdhe3072", 00:23:51.507 "ffdhe4096", 00:23:51.507 "ffdhe6144", 00:23:51.507 "ffdhe8192" 00:23:51.507 ] 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "bdev_nvme_attach_controller", 00:23:51.507 "params": { 00:23:51.507 "name": "TLSTEST", 00:23:51.507 "trtype": "TCP", 00:23:51.507 "adrfam": "IPv4", 00:23:51.507 "traddr": "10.0.0.2", 00:23:51.507 "trsvcid": "4420", 00:23:51.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.507 "prchk_reftag": false, 00:23:51.507 "prchk_guard": false, 00:23:51.507 "ctrlr_loss_timeout_sec": 0, 00:23:51.507 "reconnect_delay_sec": 0, 00:23:51.507 "fast_io_fail_timeout_sec": 0, 00:23:51.507 "psk": "key0", 00:23:51.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.507 "hdgst": false, 00:23:51.507 "ddgst": false, 00:23:51.507 "multipath": "multipath" 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "bdev_nvme_set_hotplug", 00:23:51.507 "params": { 00:23:51.507 "period_us": 
100000, 00:23:51.507 "enable": false 00:23:51.507 } 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "method": "bdev_wait_for_examine" 00:23:51.507 } 00:23:51.507 ] 00:23:51.507 }, 00:23:51.507 { 00:23:51.507 "subsystem": "nbd", 00:23:51.507 "config": [] 00:23:51.507 } 00:23:51.507 ] 00:23:51.507 }' 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3024005 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3024005 ']' 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3024005 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024005 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024005' 00:23:51.507 killing process with pid 3024005 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3024005 00:23:51.507 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.507 00:23:51.507 Latency(us) 00:23:51.507 [2024-10-13T17:53:41.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.507 [2024-10-13T17:53:41.322Z] =================================================================================================================== 00:23:51.507 [2024-10-13T17:53:41.322Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.507 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3024005 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3023644 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3023644 ']' 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3023644 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3023644 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3023644' 00:23:52.442 killing process with pid 3023644 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3023644 00:23:52.442 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3023644 00:23:53.817 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:53.817 
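[editor's note] The configuration dumps captured at target/tls.sh@198-199 above (save_config against the target and against bdevperf) feed the final stage: tls.sh@205 restarts nvmf_tgt with -c /dev/fd/62, i.e. directly from the saved JSON, presumably to verify that the keyring entry and the secure_channel listener survive a configuration round-trip. A sketch of the same round-trip, assuming a regular file (hypothetical name tgt_config.json) in place of the script's /dev/fd/62 process substitution, and abbreviating the netns-wrapped nvmf_tgt command line from the log:

# Capture the live configuration, including keyring_file_add_key and the
# TLS listener ("secure_channel": true) visible in the dump above.
rpc.py save_config > tgt_config.json
# ...stop the running target, then start a new one from the saved JSON:
./build/bin/nvmf_tgt -m 0x2 -c tgt_config.json

If the replayed configuration is complete, the subsequent TLS attach from bdevperf should succeed without re-issuing any of the setup RPCs.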
19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:53.817 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:53.817 "subsystems": [ 00:23:53.817 { 00:23:53.817 "subsystem": "keyring", 00:23:53.817 "config": [ 00:23:53.817 { 00:23:53.817 "method": "keyring_file_add_key", 00:23:53.817 "params": { 00:23:53.817 "name": "key0", 00:23:53.817 "path": "/tmp/tmp.UL0368ELSH" 00:23:53.817 } 00:23:53.817 } 00:23:53.817 ] 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "subsystem": "iobuf", 00:23:53.817 "config": [ 00:23:53.817 { 00:23:53.817 "method": "iobuf_set_options", 00:23:53.817 "params": { 00:23:53.817 "small_pool_count": 8192, 00:23:53.817 "large_pool_count": 1024, 00:23:53.817 "small_bufsize": 8192, 00:23:53.817 "large_bufsize": 135168 00:23:53.817 } 00:23:53.817 } 00:23:53.817 ] 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "subsystem": "sock", 00:23:53.817 "config": [ 00:23:53.817 { 00:23:53.817 "method": "sock_set_default_impl", 00:23:53.817 "params": { 00:23:53.817 "impl_name": "posix" 00:23:53.817 } 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "method": "sock_impl_set_options", 00:23:53.817 "params": { 00:23:53.817 "impl_name": "ssl", 00:23:53.817 "recv_buf_size": 4096, 00:23:53.817 "send_buf_size": 4096, 00:23:53.817 "enable_recv_pipe": true, 00:23:53.817 "enable_quickack": false, 00:23:53.817 "enable_placement_id": 0, 00:23:53.817 "enable_zerocopy_send_server": true, 00:23:53.817 "enable_zerocopy_send_client": false, 00:23:53.817 "zerocopy_threshold": 0, 00:23:53.817 "tls_version": 0, 00:23:53.817 "enable_ktls": false 00:23:53.817 } 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "method": "sock_impl_set_options", 00:23:53.817 "params": { 00:23:53.817 "impl_name": "posix", 00:23:53.817 "recv_buf_size": 2097152, 00:23:53.817 "send_buf_size": 2097152, 00:23:53.817 "enable_recv_pipe": true, 00:23:53.817 "enable_quickack": false, 00:23:53.817 "enable_placement_id": 0, 00:23:53.817 "enable_zerocopy_send_server": true, 00:23:53.817 "enable_zerocopy_send_client": false, 00:23:53.817 "zerocopy_threshold": 0, 00:23:53.817 "tls_version": 0, 00:23:53.817 "enable_ktls": false 00:23:53.817 } 00:23:53.817 } 00:23:53.817 ] 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "subsystem": "vmd", 00:23:53.817 "config": [] 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "subsystem": "accel", 00:23:53.817 "config": [ 00:23:53.817 { 00:23:53.817 "method": "accel_set_options", 00:23:53.817 "params": { 00:23:53.817 "small_cache_size": 128, 00:23:53.817 "large_cache_size": 16, 00:23:53.817 "task_count": 2048, 00:23:53.817 "sequence_count": 2048, 00:23:53.817 "buf_count": 2048 00:23:53.817 } 00:23:53.817 } 00:23:53.817 ] 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "subsystem": "bdev", 00:23:53.817 "config": [ 00:23:53.817 { 00:23:53.817 "method": "bdev_set_options", 00:23:53.817 "params": { 00:23:53.817 "bdev_io_pool_size": 65535, 00:23:53.817 "bdev_io_cache_size": 256, 00:23:53.817 "bdev_auto_examine": true, 00:23:53.817 "iobuf_small_cache_size": 128, 00:23:53.817 "iobuf_large_cache_size": 16 00:23:53.817 } 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "method": "bdev_raid_set_options", 00:23:53.817 "params": { 00:23:53.817 "process_window_size_kb": 1024, 00:23:53.817 "process_max_bandwidth_mb_sec": 0 00:23:53.817 } 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "method": "bdev_iscsi_set_options", 00:23:53.817 "params": { 00:23:53.817 "timeout_sec": 30 00:23:53.817 } 00:23:53.817 }, 00:23:53.817 { 00:23:53.817 "method": "bdev_nvme_set_options", 
00:23:53.818 "params": { 00:23:53.818 "action_on_timeout": "none", 00:23:53.818 "timeout_us": 0, 00:23:53.818 "timeout_admin_us": 0, 00:23:53.818 "keep_alive_timeout_ms": 10000, 00:23:53.818 "arbitration_burst": 0, 00:23:53.818 "low_priority_weight": 0, 00:23:53.818 "medium_priority_weight": 0, 00:23:53.818 "high_priority_weight": 0, 00:23:53.818 "nvme_adminq_poll_period_us": 10000, 00:23:53.818 "nvme_ioq_poll_period_us": 0, 00:23:53.818 "io_queue_requests": 0, 00:23:53.818 "delay_cmd_submit": true, 00:23:53.818 "transport_retry_count": 4, 00:23:53.818 "bdev_retry_count": 3, 00:23:53.818 "transport_ack_timeout": 0, 00:23:53.818 "ctrlr_loss_timeout_sec": 0, 00:23:53.818 "reconnect_delay_sec": 0, 00:23:53.818 "fast_io_fail_timeout_sec": 0, 00:23:53.818 "disable_auto_failback": false, 00:23:53.818 "generate_uuids": false, 00:23:53.818 "transport_tos": 0, 00:23:53.818 "nvme_error_stat": false, 00:23:53.818 "rdma_srq_size": 0, 00:23:53.818 "io_path_stat": false, 00:23:53.818 "allow_accel_sequence": false, 00:23:53.818 "rdma_max_cq_size": 0, 00:23:53.818 "rdma_cm_event_timeout_ms": 0, 00:23:53.818 "dhchap_digests": [ 00:23:53.818 "sha256", 00:23:53.818 "sha384", 00:23:53.818 "sha512" 00:23:53.818 ], 00:23:53.818 "dhchap_dhgroups": [ 00:23:53.818 "null", 00:23:53.818 "ffdhe2048", 00:23:53.818 "ffdhe3072", 00:23:53.818 "ffdhe4096", 00:23:53.818 "ffdhe6144", 00:23:53.818 "ffdhe8192" 00:23:53.818 ] 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "bdev_nvme_set_hotplug", 00:23:53.818 "params": { 00:23:53.818 "period_us": 100000, 00:23:53.818 "enable": false 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "bdev_malloc_create", 00:23:53.818 "params": { 00:23:53.818 "name": "malloc0", 00:23:53.818 "num_blocks": 8192, 00:23:53.818 "block_size": 4096, 00:23:53.818 "physical_block_size": 4096, 00:23:53.818 "uuid": "8d234b39-6db1-4d10-beb4-a8108905d75c", 00:23:53.818 "optimal_io_boundary": 0, 00:23:53.818 "md_size": 0, 00:23:53.818 "dif_type": 0, 00:23:53.818 "dif_is_head_of_md": false, 00:23:53.818 "dif_pi_format": 0 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "bdev_wait_for_examine" 00:23:53.818 } 00:23:53.818 ] 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "subsystem": "nbd", 00:23:53.818 "config": [] 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "subsystem": "scheduler", 00:23:53.818 "config": [ 00:23:53.818 { 00:23:53.818 "method": "framework_set_scheduler", 00:23:53.818 "params": { 00:23:53.818 "name": "static" 00:23:53.818 } 00:23:53.818 } 00:23:53.818 ] 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "subsystem": "nvmf", 00:23:53.818 "config": [ 00:23:53.818 { 00:23:53.818 "method": "nvmf_set_config", 00:23:53.818 "params": { 00:23:53.818 "discovery_filter": "match_any", 00:23:53.818 "admin_cmd_passthru": { 00:23:53.818 "identify_ctrlr": false 00:23:53.818 }, 00:23:53.818 "dhchap_digests": [ 00:23:53.818 "sha256", 00:23:53.818 "sha384", 00:23:53.818 "sha512" 00:23:53.818 ], 00:23:53.818 "dhchap_dhgroups": [ 00:23:53.818 "null", 00:23:53.818 "ffdhe2048", 00:23:53.818 "ffdhe3072", 00:23:53.818 "ffdhe4096", 00:23:53.818 "ffdhe6144", 00:23:53.818 "ffdhe8192" 00:23:53.818 ] 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "nvmf_set_max_subsystems", 00:23:53.818 "params": { 00:23:53.818 "max_subsystems": 1024 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "nvmf_set_crdt", 00:23:53.818 "params": { 00:23:53.818 "crdt1": 0, 00:23:53.818 "crdt2": 0, 00:23:53.818 "crdt3": 0 00:23:53.818 } 00:23:53.818 }, 
00:23:53.818 { 00:23:53.818 "method": "nvmf_create_transport", 00:23:53.818 "params": { 00:23:53.818 "trtype": "TCP", 00:23:53.818 "max_queue_depth": 128, 00:23:53.818 "max_io_qpairs_per_ctrlr": 127, 00:23:53.818 "in_capsule_data_size": 4096, 00:23:53.818 "max_io_size": 131072, 00:23:53.818 "io_unit_size": 131072, 00:23:53.818 "max_aq_depth": 128, 00:23:53.818 "num_shared_buffers": 511, 00:23:53.818 "buf_cache_size": 4294967295, 00:23:53.818 "dif_insert_or_strip": false, 00:23:53.818 "zcopy": false, 00:23:53.818 "c2h_success": false, 00:23:53.818 "sock_priority": 0, 00:23:53.818 "abort_timeout_sec": 1, 00:23:53.818 "ack_timeout": 0, 00:23:53.818 "data_wr_pool_size": 0 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "nvmf_create_subsystem", 00:23:53.818 "params": { 00:23:53.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.818 "allow_any_host": false, 00:23:53.818 "serial_number": "SPDK00000000000001", 00:23:53.818 "model_number": "SPDK bdev Controller", 00:23:53.818 "max_namespaces": 10, 00:23:53.818 "min_cntlid": 1, 00:23:53.818 "max_cntlid": 65519, 00:23:53.818 "ana_reporting": false 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "nvmf_subsystem_add_host", 00:23:53.818 "params": { 00:23:53.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.818 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.818 "psk": "key0" 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "nvmf_subsystem_add_ns", 00:23:53.818 "params": { 00:23:53.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.818 "namespace": { 00:23:53.818 "nsid": 1, 00:23:53.818 "bdev_name": "malloc0", 00:23:53.818 "nguid": "8D234B396DB14D10BEB4A8108905D75C", 00:23:53.818 "uuid": "8d234b39-6db1-4d10-beb4-a8108905d75c", 00:23:53.818 "no_auto_visible": false 00:23:53.818 } 00:23:53.818 } 00:23:53.818 }, 00:23:53.818 { 00:23:53.818 "method": "nvmf_subsystem_add_listener", 00:23:53.818 "params": { 00:23:53.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.818 "listen_address": { 00:23:53.818 "trtype": "TCP", 00:23:53.818 "adrfam": "IPv4", 00:23:53.818 "traddr": "10.0.0.2", 00:23:53.818 "trsvcid": "4420" 00:23:53.818 }, 00:23:53.818 "secure_channel": true 00:23:53.818 } 00:23:53.818 } 00:23:53.818 ] 00:23:53.818 } 00:23:53.818 ] 00:23:53.818 }' 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3024594 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3024594 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3024594 ']' 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:53.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.818 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.818 [2024-10-13 19:53:43.418456] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:53.819 [2024-10-13 19:53:43.418630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.819 [2024-10-13 19:53:43.558796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.077 [2024-10-13 19:53:43.694869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.077 [2024-10-13 19:53:43.694968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.077 [2024-10-13 19:53:43.694995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.077 [2024-10-13 19:53:43.695020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.077 [2024-10-13 19:53:43.695040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.077 [2024-10-13 19:53:43.696789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.644 [2024-10-13 19:53:44.240591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.644 [2024-10-13 19:53:44.272599] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.644 [2024-10-13 19:53:44.272969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3024669 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3024669 /var/tmp/bdevperf.sock 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3024669 ']' 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:54.644 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.644 19:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:54.644 "subsystems": [ 00:23:54.644 { 00:23:54.644 "subsystem": "keyring", 00:23:54.644 "config": [ 00:23:54.644 { 00:23:54.644 "method": "keyring_file_add_key", 00:23:54.644 "params": { 00:23:54.644 "name": "key0", 00:23:54.644 "path": "/tmp/tmp.UL0368ELSH" 00:23:54.644 } 00:23:54.644 } 00:23:54.644 ] 00:23:54.644 }, 00:23:54.644 { 00:23:54.644 "subsystem": "iobuf", 00:23:54.644 "config": [ 00:23:54.644 { 00:23:54.644 "method": "iobuf_set_options", 00:23:54.644 "params": { 00:23:54.644 "small_pool_count": 8192, 00:23:54.644 "large_pool_count": 1024, 00:23:54.644 "small_bufsize": 8192, 00:23:54.644 "large_bufsize": 135168 00:23:54.644 } 00:23:54.644 } 00:23:54.644 ] 00:23:54.644 }, 00:23:54.644 { 00:23:54.644 "subsystem": "sock", 00:23:54.644 "config": [ 00:23:54.644 { 00:23:54.644 "method": "sock_set_default_impl", 00:23:54.644 "params": { 00:23:54.644 "impl_name": "posix" 00:23:54.644 } 00:23:54.644 }, 00:23:54.644 { 00:23:54.644 "method": "sock_impl_set_options", 00:23:54.644 "params": { 00:23:54.644 "impl_name": "ssl", 00:23:54.644 "recv_buf_size": 4096, 00:23:54.644 "send_buf_size": 4096, 00:23:54.644 "enable_recv_pipe": true, 00:23:54.644 "enable_quickack": false, 00:23:54.644 "enable_placement_id": 0, 00:23:54.644 "enable_zerocopy_send_server": true, 00:23:54.644 "enable_zerocopy_send_client": false, 00:23:54.644 "zerocopy_threshold": 0, 00:23:54.644 "tls_version": 0, 00:23:54.644 "enable_ktls": false 00:23:54.644 } 00:23:54.644 }, 00:23:54.644 { 00:23:54.644 "method": "sock_impl_set_options", 00:23:54.644 "params": { 00:23:54.644 "impl_name": "posix", 00:23:54.644 "recv_buf_size": 2097152, 00:23:54.644 "send_buf_size": 2097152, 00:23:54.644 "enable_recv_pipe": true, 00:23:54.644 "enable_quickack": false, 00:23:54.644 "enable_placement_id": 0, 00:23:54.644 "enable_zerocopy_send_server": true, 00:23:54.644 "enable_zerocopy_send_client": false, 00:23:54.644 "zerocopy_threshold": 0, 00:23:54.644 "tls_version": 0, 00:23:54.644 "enable_ktls": false 00:23:54.644 } 00:23:54.644 } 00:23:54.644 ] 00:23:54.644 }, 00:23:54.644 { 00:23:54.644 "subsystem": "vmd", 00:23:54.644 "config": [] 00:23:54.644 }, 00:23:54.644 { 00:23:54.644 "subsystem": "accel", 00:23:54.644 "config": [ 00:23:54.644 { 00:23:54.644 "method": "accel_set_options", 00:23:54.644 "params": { 00:23:54.644 "small_cache_size": 128, 00:23:54.644 "large_cache_size": 16, 00:23:54.644 "task_count": 2048, 00:23:54.644 "sequence_count": 2048, 00:23:54.644 "buf_count": 2048 00:23:54.645 } 00:23:54.645 } 00:23:54.645 ] 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "subsystem": "bdev", 00:23:54.645 "config": [ 00:23:54.645 { 00:23:54.645 "method": "bdev_set_options", 00:23:54.645 "params": { 00:23:54.645 "bdev_io_pool_size": 65535, 00:23:54.645 "bdev_io_cache_size": 256, 00:23:54.645 "bdev_auto_examine": true, 00:23:54.645 "iobuf_small_cache_size": 128, 00:23:54.645 "iobuf_large_cache_size": 16 00:23:54.645 } 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "method": "bdev_raid_set_options", 00:23:54.645 "params": { 00:23:54.645 "process_window_size_kb": 1024, 00:23:54.645 "process_max_bandwidth_mb_sec": 0 00:23:54.645 } 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "method": "bdev_iscsi_set_options", 00:23:54.645 "params": { 00:23:54.645 "timeout_sec": 30 00:23:54.645 } 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "method": "bdev_nvme_set_options", 00:23:54.645 "params": { 00:23:54.645 "action_on_timeout": "none", 00:23:54.645 "timeout_us": 0, 00:23:54.645 
"timeout_admin_us": 0, 00:23:54.645 "keep_alive_timeout_ms": 10000, 00:23:54.645 "arbitration_burst": 0, 00:23:54.645 "low_priority_weight": 0, 00:23:54.645 "medium_priority_weight": 0, 00:23:54.645 "high_priority_weight": 0, 00:23:54.645 "nvme_adminq_poll_period_us": 10000, 00:23:54.645 "nvme_ioq_poll_period_us": 0, 00:23:54.645 "io_queue_requests": 512, 00:23:54.645 "delay_cmd_submit": true, 00:23:54.645 "transport_retry_count": 4, 00:23:54.645 "bdev_retry_count": 3, 00:23:54.645 "transport_ack_timeout": 0, 00:23:54.645 "ctrlr_loss_timeout_sec": 0, 00:23:54.645 "reconnect_delay_sec": 0, 00:23:54.645 "fast_io_fail_timeout_sec": 0, 00:23:54.645 "disable_auto_failback": false, 00:23:54.645 "generate_uuids": false, 00:23:54.645 "transport_tos": 0, 00:23:54.645 "nvme_error_stat": false, 00:23:54.645 "rdma_srq_size": 0, 00:23:54.645 "io_path_stat": false, 00:23:54.645 "allow_accel_sequence": false, 00:23:54.645 "rdma_max_cq_size": 0, 00:23:54.645 "rdma_cm_event_timeout_ms": 0, 00:23:54.645 "dhchap_digests": [ 00:23:54.645 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.645 "sha256", 00:23:54.645 "sha384", 00:23:54.645 "sha512" 00:23:54.645 ], 00:23:54.645 "dhchap_dhgroups": [ 00:23:54.645 "null", 00:23:54.645 "ffdhe2048", 00:23:54.645 "ffdhe3072", 00:23:54.645 "ffdhe4096", 00:23:54.645 "ffdhe6144", 00:23:54.645 "ffdhe8192" 00:23:54.645 ] 00:23:54.645 } 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "method": "bdev_nvme_attach_controller", 00:23:54.645 "params": { 00:23:54.645 "name": "TLSTEST", 00:23:54.645 "trtype": "TCP", 00:23:54.645 "adrfam": "IPv4", 00:23:54.645 "traddr": "10.0.0.2", 00:23:54.645 "trsvcid": "4420", 00:23:54.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.645 "prchk_reftag": false, 00:23:54.645 "prchk_guard": false, 00:23:54.645 "ctrlr_loss_timeout_sec": 0, 00:23:54.645 "reconnect_delay_sec": 0, 00:23:54.645 "fast_io_fail_timeout_sec": 0, 00:23:54.645 "psk": "key0", 00:23:54.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.645 "hdgst": false, 00:23:54.645 "ddgst": false, 00:23:54.645 "multipath": "multipath" 00:23:54.645 } 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "method": "bdev_nvme_set_hotplug", 00:23:54.645 "params": { 00:23:54.645 "period_us": 100000, 00:23:54.645 "enable": false 00:23:54.645 } 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "method": "bdev_wait_for_examine" 00:23:54.645 } 00:23:54.645 ] 00:23:54.645 }, 00:23:54.645 { 00:23:54.645 "subsystem": "nbd", 00:23:54.645 "config": [] 00:23:54.645 } 00:23:54.645 ] 00:23:54.645 }' 00:23:54.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.645 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.645 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.903 [2024-10-13 19:53:44.478642] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:23:54.903 [2024-10-13 19:53:44.478799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024669 ] 00:23:54.903 [2024-10-13 19:53:44.601935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.161 [2024-10-13 19:53:44.720850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.419 [2024-10-13 19:53:45.122496] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.677 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.677 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.677 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.934 Running I/O for 10 seconds... 00:23:57.800 2391.00 IOPS, 9.34 MiB/s [2024-10-13T17:53:48.546Z] 2454.00 IOPS, 9.59 MiB/s [2024-10-13T17:53:49.918Z] 2458.00 IOPS, 9.60 MiB/s [2024-10-13T17:53:50.852Z] 2471.00 IOPS, 9.65 MiB/s [2024-10-13T17:53:51.785Z] 2475.80 IOPS, 9.67 MiB/s [2024-10-13T17:53:52.718Z] 2481.17 IOPS, 9.69 MiB/s [2024-10-13T17:53:53.652Z] 2484.57 IOPS, 9.71 MiB/s [2024-10-13T17:53:54.586Z] 2488.38 IOPS, 9.72 MiB/s [2024-10-13T17:53:55.959Z] 2491.56 IOPS, 9.73 MiB/s [2024-10-13T17:53:55.959Z] 2491.60 IOPS, 9.73 MiB/s 00:24:06.144 Latency(us) 00:24:06.144 [2024-10-13T17:53:55.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.144 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.144 Verification LBA range: start 0x0 length 0x2000 00:24:06.144 TLSTESTn1 : 10.02 2498.53 9.76 0.00 0.00 51139.04 8835.22 49321.91 00:24:06.144 [2024-10-13T17:53:55.959Z] =================================================================================================================== 00:24:06.144 [2024-10-13T17:53:55.959Z] Total : 2498.53 9.76 0.00 0.00 51139.04 8835.22 49321.91 00:24:06.144 { 00:24:06.144 "results": [ 00:24:06.144 { 00:24:06.144 "job": "TLSTESTn1", 00:24:06.144 "core_mask": "0x4", 00:24:06.144 "workload": "verify", 00:24:06.144 "status": "finished", 00:24:06.144 "verify_range": { 00:24:06.144 "start": 0, 00:24:06.144 "length": 8192 00:24:06.144 }, 00:24:06.144 "queue_depth": 128, 00:24:06.144 "io_size": 4096, 00:24:06.144 "runtime": 10.023075, 00:24:06.144 "iops": 2498.534631338187, 00:24:06.144 "mibps": 9.759900903664793, 00:24:06.144 "io_failed": 0, 00:24:06.144 "io_timeout": 0, 00:24:06.144 "avg_latency_us": 51139.04168030987, 00:24:06.144 "min_latency_us": 8835.223703703703, 00:24:06.144 "max_latency_us": 49321.90814814815 00:24:06.144 } 00:24:06.144 ], 00:24:06.144 "core_count": 1 00:24:06.144 } 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3024669 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3024669 ']' 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3024669 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024669 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024669' 00:24:06.144 killing process with pid 3024669 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3024669 00:24:06.144 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.144 00:24:06.144 Latency(us) 00:24:06.144 [2024-10-13T17:53:55.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.144 [2024-10-13T17:53:55.959Z] =================================================================================================================== 00:24:06.144 [2024-10-13T17:53:55.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.144 19:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3024669 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3024594 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3024594 ']' 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3024594 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024594 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024594' 00:24:06.710 killing process with pid 3024594 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3024594 00:24:06.710 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3024594 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3026230 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3026230 
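A rough cross-check on the 10-second run above (not something the test itself computes): with a fixed queue depth of 128 and 4096-byte verify I/O, Little's law gives an expected average latency of about qd / IOPS = 128 / 2498.53 ≈ 51.2 ms, within a fraction of a percent of the reported 51139 us, and 2498.53 IOPS x 4 KiB ≈ 9.8 MiB/s matches the MiB/s column. For example:

  # illustrative only; numbers taken from the TLSTESTn1 result block above
  awk 'BEGIN { printf "%.0f us\n", 128 / 2498.53 * 1e6 }'   # ~51230 us vs. 51139 us reported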
00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3026230 ']' 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.085 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.085 [2024-10-13 19:53:57.880311] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:08.085 [2024-10-13 19:53:57.880476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.343 [2024-10-13 19:53:58.029095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.601 [2024-10-13 19:53:58.166537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.601 [2024-10-13 19:53:58.166623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.601 [2024-10-13 19:53:58.166657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.601 [2024-10-13 19:53:58.166681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.601 [2024-10-13 19:53:58.166702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:08.601 [2024-10-13 19:53:58.168339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.UL0368ELSH 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UL0368ELSH 00:24:09.166 19:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.425 [2024-10-13 19:53:59.080757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.425 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:09.682 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.940 [2024-10-13 19:53:59.622312] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.940 [2024-10-13 19:53:59.622735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.940 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.205 malloc0 00:24:10.205 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:10.770 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:24:11.028 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3026699 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3026699 /var/tmp/bdevperf.sock 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3026699 ']' 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.287 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.287 [2024-10-13 19:54:00.953543] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:11.287 [2024-10-13 19:54:00.953703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026699 ] 00:24:11.287 [2024-10-13 19:54:01.090984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.545 [2024-10-13 19:54:01.229874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.110 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.110 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:12.367 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:24:12.625 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:12.883 [2024-10-13 19:54:02.461206] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.883 nvme0n1 00:24:12.883 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.883 Running I/O for 1 seconds... 
00:24:14.257 2508.00 IOPS, 9.80 MiB/s 00:24:14.257 Latency(us) 00:24:14.257 [2024-10-13T17:54:04.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.257 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:14.257 Verification LBA range: start 0x0 length 0x2000 00:24:14.257 nvme0n1 : 1.03 2563.50 10.01 0.00 0.00 49338.53 8738.13 39807.05 00:24:14.257 [2024-10-13T17:54:04.072Z] =================================================================================================================== 00:24:14.257 [2024-10-13T17:54:04.072Z] Total : 2563.50 10.01 0.00 0.00 49338.53 8738.13 39807.05 00:24:14.257 { 00:24:14.257 "results": [ 00:24:14.257 { 00:24:14.257 "job": "nvme0n1", 00:24:14.257 "core_mask": "0x2", 00:24:14.257 "workload": "verify", 00:24:14.257 "status": "finished", 00:24:14.257 "verify_range": { 00:24:14.257 "start": 0, 00:24:14.257 "length": 8192 00:24:14.257 }, 00:24:14.257 "queue_depth": 128, 00:24:14.257 "io_size": 4096, 00:24:14.257 "runtime": 1.028283, 00:24:14.257 "iops": 2563.4966249563595, 00:24:14.257 "mibps": 10.01365869123578, 00:24:14.257 "io_failed": 0, 00:24:14.257 "io_timeout": 0, 00:24:14.257 "avg_latency_us": 49338.528796717816, 00:24:14.257 "min_latency_us": 8738.133333333333, 00:24:14.257 "max_latency_us": 39807.05185185185 00:24:14.257 } 00:24:14.257 ], 00:24:14.257 "core_count": 1 00:24:14.257 } 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3026699 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3026699 ']' 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3026699 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026699 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026699' 00:24:14.258 killing process with pid 3026699 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3026699 00:24:14.258 Received shutdown signal, test time was about 1.000000 seconds 00:24:14.258 00:24:14.258 Latency(us) 00:24:14.258 [2024-10-13T17:54:04.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.258 [2024-10-13T17:54:04.073Z] =================================================================================================================== 00:24:14.258 [2024-10-13T17:54:04.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.258 19:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3026699 00:24:14.824 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3026230 00:24:14.824 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3026230 ']' 00:24:14.824 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3026230 00:24:15.083 19:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026230 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026230' 00:24:15.083 killing process with pid 3026230 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3026230 00:24:15.083 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3026230 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3027311 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3027311 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3027311 ']' 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.456 19:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.456 [2024-10-13 19:54:06.033515] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:16.456 [2024-10-13 19:54:06.033679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.456 [2024-10-13 19:54:06.167063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.714 [2024-10-13 19:54:06.292350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.714 [2024-10-13 19:54:06.292457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
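For reference, the second test case above built its target configuration at runtime rather than piping JSON at startup; the sequence traced at target/tls.sh@52-59 boils down to the following RPCs against the default /var/tmp/spdk.sock (paths relative to the SPDK checkout, PSK file as created earlier in the test):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UL0368ELSH
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on the listener is the TLS-enabled path; it is what triggers the "TLS support is considered experimental" notice seen above, and the PSK itself is only attached afterwards via nvmf_subsystem_add_host --psk key0.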
00:24:16.714 [2024-10-13 19:54:06.292482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.714 [2024-10-13 19:54:06.292503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.714 [2024-10-13 19:54:06.292520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.714 [2024-10-13 19:54:06.294068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 [2024-10-13 19:54:07.070119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.537 malloc0 00:24:17.537 [2024-10-13 19:54:07.131734] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.537 [2024-10-13 19:54:07.132122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3027461 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3027461 /var/tmp/bdevperf.sock 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3027461 ']' 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.537 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.537 [2024-10-13 19:54:07.244655] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
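Once the bdevperf instance launched above finishes its 1-second verify pass, the test dumps the live configuration from both sides: rpc_cmd save_config on the target (captured as tgtcfg) and rpc.py -s /var/tmp/bdevperf.sock save_config on the bdevperf side (captured as bperfcfg); those dumps are the large JSON blocks that follow. A config saved this way can also be filtered offline; an illustrative one-liner (jq filter assumed, not part of the test script) to pull just the listener entry back out of the target:

  ./scripts/rpc.py save_config | \
    jq '.subsystems[] | select(.subsystem == "nvmf") | .config[]
        | select(.method == "nvmf_subsystem_add_listener")'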
00:24:17.537 [2024-10-13 19:54:07.244814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027461 ] 00:24:17.795 [2024-10-13 19:54:07.384921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.795 [2024-10-13 19:54:07.523449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.728 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.728 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.728 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UL0368ELSH 00:24:18.985 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:19.243 [2024-10-13 19:54:08.815933] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.243 nvme0n1 00:24:19.243 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.243 Running I/O for 1 seconds... 00:24:20.619 2520.00 IOPS, 9.84 MiB/s 00:24:20.619 Latency(us) 00:24:20.619 [2024-10-13T17:54:10.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.619 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:20.619 Verification LBA range: start 0x0 length 0x2000 00:24:20.619 nvme0n1 : 1.03 2581.36 10.08 0.00 0.00 49014.40 8641.04 38059.43 00:24:20.619 [2024-10-13T17:54:10.434Z] =================================================================================================================== 00:24:20.619 [2024-10-13T17:54:10.434Z] Total : 2581.36 10.08 0.00 0.00 49014.40 8641.04 38059.43 00:24:20.619 { 00:24:20.619 "results": [ 00:24:20.619 { 00:24:20.619 "job": "nvme0n1", 00:24:20.619 "core_mask": "0x2", 00:24:20.619 "workload": "verify", 00:24:20.619 "status": "finished", 00:24:20.619 "verify_range": { 00:24:20.619 "start": 0, 00:24:20.619 "length": 8192 00:24:20.619 }, 00:24:20.619 "queue_depth": 128, 00:24:20.619 "io_size": 4096, 00:24:20.619 "runtime": 1.025816, 00:24:20.619 "iops": 2581.3596200488196, 00:24:20.619 "mibps": 10.083436015815701, 00:24:20.619 "io_failed": 0, 00:24:20.619 "io_timeout": 0, 00:24:20.619 "avg_latency_us": 49014.4033657827, 00:24:20.619 "min_latency_us": 8641.042962962963, 00:24:20.619 "max_latency_us": 38059.42518518519 00:24:20.619 } 00:24:20.619 ], 00:24:20.619 "core_count": 1 00:24:20.619 } 00:24:20.619 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:20.619 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.619 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.619 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.619 19:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:20.619 "subsystems": [ 00:24:20.619 { 00:24:20.619 "subsystem": "keyring", 00:24:20.619 "config": [ 00:24:20.619 { 00:24:20.619 "method": "keyring_file_add_key", 00:24:20.619 "params": { 00:24:20.619 "name": "key0", 00:24:20.619 "path": "/tmp/tmp.UL0368ELSH" 00:24:20.619 } 00:24:20.619 } 00:24:20.619 ] 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "subsystem": "iobuf", 00:24:20.619 "config": [ 00:24:20.619 { 00:24:20.619 "method": "iobuf_set_options", 00:24:20.619 "params": { 00:24:20.619 "small_pool_count": 8192, 00:24:20.619 "large_pool_count": 1024, 00:24:20.619 "small_bufsize": 8192, 00:24:20.619 "large_bufsize": 135168 00:24:20.619 } 00:24:20.619 } 00:24:20.619 ] 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "subsystem": "sock", 00:24:20.619 "config": [ 00:24:20.619 { 00:24:20.619 "method": "sock_set_default_impl", 00:24:20.619 "params": { 00:24:20.619 "impl_name": "posix" 00:24:20.619 } 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "method": "sock_impl_set_options", 00:24:20.619 "params": { 00:24:20.619 "impl_name": "ssl", 00:24:20.619 "recv_buf_size": 4096, 00:24:20.619 "send_buf_size": 4096, 00:24:20.619 "enable_recv_pipe": true, 00:24:20.619 "enable_quickack": false, 00:24:20.619 "enable_placement_id": 0, 00:24:20.619 "enable_zerocopy_send_server": true, 00:24:20.619 "enable_zerocopy_send_client": false, 00:24:20.619 "zerocopy_threshold": 0, 00:24:20.619 "tls_version": 0, 00:24:20.619 "enable_ktls": false 00:24:20.619 } 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "method": "sock_impl_set_options", 00:24:20.619 "params": { 00:24:20.619 "impl_name": "posix", 00:24:20.619 "recv_buf_size": 2097152, 00:24:20.619 "send_buf_size": 2097152, 00:24:20.619 "enable_recv_pipe": true, 00:24:20.619 "enable_quickack": false, 00:24:20.619 "enable_placement_id": 0, 00:24:20.619 "enable_zerocopy_send_server": true, 00:24:20.619 "enable_zerocopy_send_client": false, 00:24:20.619 "zerocopy_threshold": 0, 00:24:20.619 "tls_version": 0, 00:24:20.619 "enable_ktls": false 00:24:20.619 } 00:24:20.619 } 00:24:20.619 ] 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "subsystem": "vmd", 00:24:20.619 "config": [] 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "subsystem": "accel", 00:24:20.619 "config": [ 00:24:20.619 { 00:24:20.619 "method": "accel_set_options", 00:24:20.619 "params": { 00:24:20.619 "small_cache_size": 128, 00:24:20.619 "large_cache_size": 16, 00:24:20.619 "task_count": 2048, 00:24:20.619 "sequence_count": 2048, 00:24:20.619 "buf_count": 2048 00:24:20.619 } 00:24:20.619 } 00:24:20.619 ] 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "subsystem": "bdev", 00:24:20.619 "config": [ 00:24:20.619 { 00:24:20.619 "method": "bdev_set_options", 00:24:20.619 "params": { 00:24:20.619 "bdev_io_pool_size": 65535, 00:24:20.619 "bdev_io_cache_size": 256, 00:24:20.619 "bdev_auto_examine": true, 00:24:20.619 "iobuf_small_cache_size": 128, 00:24:20.619 "iobuf_large_cache_size": 16 00:24:20.619 } 00:24:20.619 }, 00:24:20.619 { 00:24:20.619 "method": "bdev_raid_set_options", 00:24:20.619 "params": { 00:24:20.619 "process_window_size_kb": 1024, 00:24:20.619 "process_max_bandwidth_mb_sec": 0 00:24:20.619 } 00:24:20.619 }, 00:24:20.619 { 00:24:20.620 "method": "bdev_iscsi_set_options", 00:24:20.620 "params": { 00:24:20.620 "timeout_sec": 30 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "bdev_nvme_set_options", 00:24:20.620 "params": { 00:24:20.620 "action_on_timeout": "none", 00:24:20.620 "timeout_us": 0, 00:24:20.620 
"timeout_admin_us": 0, 00:24:20.620 "keep_alive_timeout_ms": 10000, 00:24:20.620 "arbitration_burst": 0, 00:24:20.620 "low_priority_weight": 0, 00:24:20.620 "medium_priority_weight": 0, 00:24:20.620 "high_priority_weight": 0, 00:24:20.620 "nvme_adminq_poll_period_us": 10000, 00:24:20.620 "nvme_ioq_poll_period_us": 0, 00:24:20.620 "io_queue_requests": 0, 00:24:20.620 "delay_cmd_submit": true, 00:24:20.620 "transport_retry_count": 4, 00:24:20.620 "bdev_retry_count": 3, 00:24:20.620 "transport_ack_timeout": 0, 00:24:20.620 "ctrlr_loss_timeout_sec": 0, 00:24:20.620 "reconnect_delay_sec": 0, 00:24:20.620 "fast_io_fail_timeout_sec": 0, 00:24:20.620 "disable_auto_failback": false, 00:24:20.620 "generate_uuids": false, 00:24:20.620 "transport_tos": 0, 00:24:20.620 "nvme_error_stat": false, 00:24:20.620 "rdma_srq_size": 0, 00:24:20.620 "io_path_stat": false, 00:24:20.620 "allow_accel_sequence": false, 00:24:20.620 "rdma_max_cq_size": 0, 00:24:20.620 "rdma_cm_event_timeout_ms": 0, 00:24:20.620 "dhchap_digests": [ 00:24:20.620 "sha256", 00:24:20.620 "sha384", 00:24:20.620 "sha512" 00:24:20.620 ], 00:24:20.620 "dhchap_dhgroups": [ 00:24:20.620 "null", 00:24:20.620 "ffdhe2048", 00:24:20.620 "ffdhe3072", 00:24:20.620 "ffdhe4096", 00:24:20.620 "ffdhe6144", 00:24:20.620 "ffdhe8192" 00:24:20.620 ] 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "bdev_nvme_set_hotplug", 00:24:20.620 "params": { 00:24:20.620 "period_us": 100000, 00:24:20.620 "enable": false 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "bdev_malloc_create", 00:24:20.620 "params": { 00:24:20.620 "name": "malloc0", 00:24:20.620 "num_blocks": 8192, 00:24:20.620 "block_size": 4096, 00:24:20.620 "physical_block_size": 4096, 00:24:20.620 "uuid": "4fe2fa26-4ae6-4e35-8f06-51235f8c81b3", 00:24:20.620 "optimal_io_boundary": 0, 00:24:20.620 "md_size": 0, 00:24:20.620 "dif_type": 0, 00:24:20.620 "dif_is_head_of_md": false, 00:24:20.620 "dif_pi_format": 0 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "bdev_wait_for_examine" 00:24:20.620 } 00:24:20.620 ] 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "subsystem": "nbd", 00:24:20.620 "config": [] 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "subsystem": "scheduler", 00:24:20.620 "config": [ 00:24:20.620 { 00:24:20.620 "method": "framework_set_scheduler", 00:24:20.620 "params": { 00:24:20.620 "name": "static" 00:24:20.620 } 00:24:20.620 } 00:24:20.620 ] 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "subsystem": "nvmf", 00:24:20.620 "config": [ 00:24:20.620 { 00:24:20.620 "method": "nvmf_set_config", 00:24:20.620 "params": { 00:24:20.620 "discovery_filter": "match_any", 00:24:20.620 "admin_cmd_passthru": { 00:24:20.620 "identify_ctrlr": false 00:24:20.620 }, 00:24:20.620 "dhchap_digests": [ 00:24:20.620 "sha256", 00:24:20.620 "sha384", 00:24:20.620 "sha512" 00:24:20.620 ], 00:24:20.620 "dhchap_dhgroups": [ 00:24:20.620 "null", 00:24:20.620 "ffdhe2048", 00:24:20.620 "ffdhe3072", 00:24:20.620 "ffdhe4096", 00:24:20.620 "ffdhe6144", 00:24:20.620 "ffdhe8192" 00:24:20.620 ] 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_set_max_subsystems", 00:24:20.620 "params": { 00:24:20.620 "max_subsystems": 1024 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_set_crdt", 00:24:20.620 "params": { 00:24:20.620 "crdt1": 0, 00:24:20.620 "crdt2": 0, 00:24:20.620 "crdt3": 0 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_create_transport", 00:24:20.620 "params": { 00:24:20.620 "trtype": 
"TCP", 00:24:20.620 "max_queue_depth": 128, 00:24:20.620 "max_io_qpairs_per_ctrlr": 127, 00:24:20.620 "in_capsule_data_size": 4096, 00:24:20.620 "max_io_size": 131072, 00:24:20.620 "io_unit_size": 131072, 00:24:20.620 "max_aq_depth": 128, 00:24:20.620 "num_shared_buffers": 511, 00:24:20.620 "buf_cache_size": 4294967295, 00:24:20.620 "dif_insert_or_strip": false, 00:24:20.620 "zcopy": false, 00:24:20.620 "c2h_success": false, 00:24:20.620 "sock_priority": 0, 00:24:20.620 "abort_timeout_sec": 1, 00:24:20.620 "ack_timeout": 0, 00:24:20.620 "data_wr_pool_size": 0 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_create_subsystem", 00:24:20.620 "params": { 00:24:20.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.620 "allow_any_host": false, 00:24:20.620 "serial_number": "00000000000000000000", 00:24:20.620 "model_number": "SPDK bdev Controller", 00:24:20.620 "max_namespaces": 32, 00:24:20.620 "min_cntlid": 1, 00:24:20.620 "max_cntlid": 65519, 00:24:20.620 "ana_reporting": false 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_subsystem_add_host", 00:24:20.620 "params": { 00:24:20.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.620 "host": "nqn.2016-06.io.spdk:host1", 00:24:20.620 "psk": "key0" 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_subsystem_add_ns", 00:24:20.620 "params": { 00:24:20.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.620 "namespace": { 00:24:20.620 "nsid": 1, 00:24:20.620 "bdev_name": "malloc0", 00:24:20.620 "nguid": "4FE2FA264AE64E358F0651235F8C81B3", 00:24:20.620 "uuid": "4fe2fa26-4ae6-4e35-8f06-51235f8c81b3", 00:24:20.620 "no_auto_visible": false 00:24:20.620 } 00:24:20.620 } 00:24:20.620 }, 00:24:20.620 { 00:24:20.620 "method": "nvmf_subsystem_add_listener", 00:24:20.620 "params": { 00:24:20.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.620 "listen_address": { 00:24:20.620 "trtype": "TCP", 00:24:20.620 "adrfam": "IPv4", 00:24:20.620 "traddr": "10.0.0.2", 00:24:20.620 "trsvcid": "4420" 00:24:20.620 }, 00:24:20.620 "secure_channel": false, 00:24:20.620 "sock_impl": "ssl" 00:24:20.620 } 00:24:20.620 } 00:24:20.620 ] 00:24:20.620 } 00:24:20.620 ] 00:24:20.620 }' 00:24:20.620 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:20.879 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:20.879 "subsystems": [ 00:24:20.879 { 00:24:20.879 "subsystem": "keyring", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "keyring_file_add_key", 00:24:20.879 "params": { 00:24:20.879 "name": "key0", 00:24:20.879 "path": "/tmp/tmp.UL0368ELSH" 00:24:20.879 } 00:24:20.879 } 00:24:20.879 ] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "iobuf", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "iobuf_set_options", 00:24:20.879 "params": { 00:24:20.879 "small_pool_count": 8192, 00:24:20.879 "large_pool_count": 1024, 00:24:20.879 "small_bufsize": 8192, 00:24:20.879 "large_bufsize": 135168 00:24:20.879 } 00:24:20.879 } 00:24:20.879 ] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "sock", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "sock_set_default_impl", 00:24:20.879 "params": { 00:24:20.879 "impl_name": "posix" 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "sock_impl_set_options", 00:24:20.879 "params": { 00:24:20.879 "impl_name": "ssl", 00:24:20.879 
"recv_buf_size": 4096, 00:24:20.879 "send_buf_size": 4096, 00:24:20.879 "enable_recv_pipe": true, 00:24:20.879 "enable_quickack": false, 00:24:20.879 "enable_placement_id": 0, 00:24:20.879 "enable_zerocopy_send_server": true, 00:24:20.879 "enable_zerocopy_send_client": false, 00:24:20.879 "zerocopy_threshold": 0, 00:24:20.879 "tls_version": 0, 00:24:20.879 "enable_ktls": false 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "sock_impl_set_options", 00:24:20.879 "params": { 00:24:20.880 "impl_name": "posix", 00:24:20.880 "recv_buf_size": 2097152, 00:24:20.880 "send_buf_size": 2097152, 00:24:20.880 "enable_recv_pipe": true, 00:24:20.880 "enable_quickack": false, 00:24:20.880 "enable_placement_id": 0, 00:24:20.880 "enable_zerocopy_send_server": true, 00:24:20.880 "enable_zerocopy_send_client": false, 00:24:20.880 "zerocopy_threshold": 0, 00:24:20.880 "tls_version": 0, 00:24:20.880 "enable_ktls": false 00:24:20.880 } 00:24:20.880 } 00:24:20.880 ] 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "subsystem": "vmd", 00:24:20.880 "config": [] 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "subsystem": "accel", 00:24:20.880 "config": [ 00:24:20.880 { 00:24:20.880 "method": "accel_set_options", 00:24:20.880 "params": { 00:24:20.880 "small_cache_size": 128, 00:24:20.880 "large_cache_size": 16, 00:24:20.880 "task_count": 2048, 00:24:20.880 "sequence_count": 2048, 00:24:20.880 "buf_count": 2048 00:24:20.880 } 00:24:20.880 } 00:24:20.880 ] 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "subsystem": "bdev", 00:24:20.880 "config": [ 00:24:20.880 { 00:24:20.880 "method": "bdev_set_options", 00:24:20.880 "params": { 00:24:20.880 "bdev_io_pool_size": 65535, 00:24:20.880 "bdev_io_cache_size": 256, 00:24:20.880 "bdev_auto_examine": true, 00:24:20.880 "iobuf_small_cache_size": 128, 00:24:20.880 "iobuf_large_cache_size": 16 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_raid_set_options", 00:24:20.880 "params": { 00:24:20.880 "process_window_size_kb": 1024, 00:24:20.880 "process_max_bandwidth_mb_sec": 0 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_iscsi_set_options", 00:24:20.880 "params": { 00:24:20.880 "timeout_sec": 30 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_nvme_set_options", 00:24:20.880 "params": { 00:24:20.880 "action_on_timeout": "none", 00:24:20.880 "timeout_us": 0, 00:24:20.880 "timeout_admin_us": 0, 00:24:20.880 "keep_alive_timeout_ms": 10000, 00:24:20.880 "arbitration_burst": 0, 00:24:20.880 "low_priority_weight": 0, 00:24:20.880 "medium_priority_weight": 0, 00:24:20.880 "high_priority_weight": 0, 00:24:20.880 "nvme_adminq_poll_period_us": 10000, 00:24:20.880 "nvme_ioq_poll_period_us": 0, 00:24:20.880 "io_queue_requests": 512, 00:24:20.880 "delay_cmd_submit": true, 00:24:20.880 "transport_retry_count": 4, 00:24:20.880 "bdev_retry_count": 3, 00:24:20.880 "transport_ack_timeout": 0, 00:24:20.880 "ctrlr_loss_timeout_sec": 0, 00:24:20.880 "reconnect_delay_sec": 0, 00:24:20.880 "fast_io_fail_timeout_sec": 0, 00:24:20.880 "disable_auto_failback": false, 00:24:20.880 "generate_uuids": false, 00:24:20.880 "transport_tos": 0, 00:24:20.880 "nvme_error_stat": false, 00:24:20.880 "rdma_srq_size": 0, 00:24:20.880 "io_path_stat": false, 00:24:20.880 "allow_accel_sequence": false, 00:24:20.880 "rdma_max_cq_size": 0, 00:24:20.880 "rdma_cm_event_timeout_ms": 0, 00:24:20.880 "dhchap_digests": [ 00:24:20.880 "sha256", 00:24:20.880 "sha384", 00:24:20.880 "sha512" 00:24:20.880 ], 00:24:20.880 "dhchap_dhgroups": [ 
00:24:20.880 "null", 00:24:20.880 "ffdhe2048", 00:24:20.880 "ffdhe3072", 00:24:20.880 "ffdhe4096", 00:24:20.880 "ffdhe6144", 00:24:20.880 "ffdhe8192" 00:24:20.880 ] 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_nvme_attach_controller", 00:24:20.880 "params": { 00:24:20.880 "name": "nvme0", 00:24:20.880 "trtype": "TCP", 00:24:20.880 "adrfam": "IPv4", 00:24:20.880 "traddr": "10.0.0.2", 00:24:20.880 "trsvcid": "4420", 00:24:20.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.880 "prchk_reftag": false, 00:24:20.880 "prchk_guard": false, 00:24:20.880 "ctrlr_loss_timeout_sec": 0, 00:24:20.880 "reconnect_delay_sec": 0, 00:24:20.880 "fast_io_fail_timeout_sec": 0, 00:24:20.880 "psk": "key0", 00:24:20.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.880 "hdgst": false, 00:24:20.880 "ddgst": false, 00:24:20.880 "multipath": "multipath" 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_nvme_set_hotplug", 00:24:20.880 "params": { 00:24:20.880 "period_us": 100000, 00:24:20.880 "enable": false 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_enable_histogram", 00:24:20.880 "params": { 00:24:20.880 "name": "nvme0n1", 00:24:20.880 "enable": true 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_wait_for_examine" 00:24:20.880 } 00:24:20.880 ] 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "subsystem": "nbd", 00:24:20.880 "config": [] 00:24:20.880 } 00:24:20.880 ] 00:24:20.880 }' 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3027461 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3027461 ']' 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3027461 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027461 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027461' 00:24:20.880 killing process with pid 3027461 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3027461 00:24:20.880 Received shutdown signal, test time was about 1.000000 seconds 00:24:20.880 00:24:20.880 Latency(us) 00:24:20.880 [2024-10-13T17:54:10.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.880 [2024-10-13T17:54:10.695Z] =================================================================================================================== 00:24:20.880 [2024-10-13T17:54:10.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.880 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3027461 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3027311 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3027311 ']' 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 3027311 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027311 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027311' 00:24:21.814 killing process with pid 3027311 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3027311 00:24:21.814 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3027311 00:24:23.190 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:23.190 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:23.190 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:23.190 "subsystems": [ 00:24:23.190 { 00:24:23.190 "subsystem": "keyring", 00:24:23.190 "config": [ 00:24:23.190 { 00:24:23.190 "method": "keyring_file_add_key", 00:24:23.190 "params": { 00:24:23.190 "name": "key0", 00:24:23.190 "path": "/tmp/tmp.UL0368ELSH" 00:24:23.190 } 00:24:23.190 } 00:24:23.190 ] 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "subsystem": "iobuf", 00:24:23.190 "config": [ 00:24:23.190 { 00:24:23.190 "method": "iobuf_set_options", 00:24:23.190 "params": { 00:24:23.190 "small_pool_count": 8192, 00:24:23.190 "large_pool_count": 1024, 00:24:23.190 "small_bufsize": 8192, 00:24:23.190 "large_bufsize": 135168 00:24:23.190 } 00:24:23.190 } 00:24:23.190 ] 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "subsystem": "sock", 00:24:23.190 "config": [ 00:24:23.190 { 00:24:23.190 "method": "sock_set_default_impl", 00:24:23.190 "params": { 00:24:23.190 "impl_name": "posix" 00:24:23.190 } 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "method": "sock_impl_set_options", 00:24:23.190 "params": { 00:24:23.190 "impl_name": "ssl", 00:24:23.190 "recv_buf_size": 4096, 00:24:23.190 "send_buf_size": 4096, 00:24:23.190 "enable_recv_pipe": true, 00:24:23.190 "enable_quickack": false, 00:24:23.190 "enable_placement_id": 0, 00:24:23.190 "enable_zerocopy_send_server": true, 00:24:23.190 "enable_zerocopy_send_client": false, 00:24:23.190 "zerocopy_threshold": 0, 00:24:23.190 "tls_version": 0, 00:24:23.190 "enable_ktls": false 00:24:23.190 } 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "method": "sock_impl_set_options", 00:24:23.190 "params": { 00:24:23.190 "impl_name": "posix", 00:24:23.190 "recv_buf_size": 2097152, 00:24:23.190 "send_buf_size": 2097152, 00:24:23.190 "enable_recv_pipe": true, 00:24:23.190 "enable_quickack": false, 00:24:23.190 "enable_placement_id": 0, 00:24:23.190 "enable_zerocopy_send_server": true, 00:24:23.190 "enable_zerocopy_send_client": false, 00:24:23.190 "zerocopy_threshold": 0, 00:24:23.190 "tls_version": 0, 00:24:23.190 "enable_ktls": false 00:24:23.190 } 00:24:23.190 } 00:24:23.190 ] 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "subsystem": "vmd", 00:24:23.190 "config": [] 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "subsystem": "accel", 
00:24:23.190 "config": [ 00:24:23.190 { 00:24:23.190 "method": "accel_set_options", 00:24:23.190 "params": { 00:24:23.190 "small_cache_size": 128, 00:24:23.190 "large_cache_size": 16, 00:24:23.190 "task_count": 2048, 00:24:23.190 "sequence_count": 2048, 00:24:23.190 "buf_count": 2048 00:24:23.190 } 00:24:23.190 } 00:24:23.190 ] 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "subsystem": "bdev", 00:24:23.190 "config": [ 00:24:23.190 { 00:24:23.190 "method": "bdev_set_options", 00:24:23.190 "params": { 00:24:23.190 "bdev_io_pool_size": 65535, 00:24:23.190 "bdev_io_cache_size": 256, 00:24:23.190 "bdev_auto_examine": true, 00:24:23.190 "iobuf_small_cache_size": 128, 00:24:23.190 "iobuf_large_cache_size": 16 00:24:23.190 } 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "method": "bdev_raid_set_options", 00:24:23.190 "params": { 00:24:23.190 "process_window_size_kb": 1024, 00:24:23.190 "process_max_bandwidth_mb_sec": 0 00:24:23.190 } 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "method": "bdev_iscsi_set_options", 00:24:23.190 "params": { 00:24:23.190 "timeout_sec": 30 00:24:23.190 } 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "method": "bdev_nvme_set_options", 00:24:23.190 "params": { 00:24:23.190 "action_on_timeout": "none", 00:24:23.190 "timeout_us": 0, 00:24:23.190 "timeout_admin_us": 0, 00:24:23.190 "keep_alive_timeout_ms": 10000, 00:24:23.190 "arbitration_burst": 0, 00:24:23.190 "low_priority_weight": 0, 00:24:23.190 "medium_priority_weight": 0, 00:24:23.190 "high_priority_weight": 0, 00:24:23.190 "nvme_adminq_poll_period_us": 10000, 00:24:23.190 "nvme_ioq_poll_period_us": 0, 00:24:23.190 "io_queue_requests": 0, 00:24:23.190 "delay_cmd_submit": true, 00:24:23.190 "transport_retry_count": 4, 00:24:23.190 "bdev_retry_count": 3, 00:24:23.190 "transport_ack_timeout": 0, 00:24:23.190 "ctrlr_loss_timeout_sec": 0, 00:24:23.190 "reconnect_delay_sec": 0, 00:24:23.190 "fast_io_fail_timeout_sec": 0, 00:24:23.190 "disable_auto_failback": false, 00:24:23.190 "generate_uuids": false, 00:24:23.190 "transport_tos": 0, 00:24:23.190 "nvme_error_stat": false, 00:24:23.190 "rdma_srq_size": 0, 00:24:23.190 "io_path_stat": false, 00:24:23.190 "allow_accel_sequence": false, 00:24:23.190 "rdma_max_cq_size": 0, 00:24:23.190 "rdma_cm_event_timeout_ms": 0, 00:24:23.190 "dhchap_digests": [ 00:24:23.190 "sha256", 00:24:23.190 "sha384", 00:24:23.190 "sha512" 00:24:23.190 ], 00:24:23.190 "dhchap_dhgroups": [ 00:24:23.190 "null", 00:24:23.190 "ffdhe2048", 00:24:23.190 "ffdhe3072", 00:24:23.190 "ffdhe4096", 00:24:23.190 "ffdhe6144", 00:24:23.190 "ffdhe8192" 00:24:23.190 ] 00:24:23.190 } 00:24:23.190 }, 00:24:23.190 { 00:24:23.190 "method": "bdev_nvme_set_hotplug", 00:24:23.190 "params": { 00:24:23.190 "period_us": 100000, 00:24:23.190 "enable": false 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "bdev_malloc_create", 00:24:23.191 "params": { 00:24:23.191 "name": "malloc0", 00:24:23.191 "num_blocks": 8192, 00:24:23.191 "block_size": 4096, 00:24:23.191 "physical_block_size": 4096, 00:24:23.191 "uuid": "4fe2fa26-4ae6-4e35-8f06-51235f8c81b3", 00:24:23.191 "optimal_io_boundary": 0, 00:24:23.191 "md_size": 0, 00:24:23.191 "dif_type": 0, 00:24:23.191 "dif_is_head_of_md": false, 00:24:23.191 "dif_pi_format": 0 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "bdev_wait_for_examine" 00:24:23.191 } 00:24:23.191 ] 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "subsystem": "nbd", 00:24:23.191 "config": [] 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "subsystem": "scheduler", 00:24:23.191 "config": [ 
00:24:23.191 { 00:24:23.191 "method": "framework_set_scheduler", 00:24:23.191 "params": { 00:24:23.191 "name": "static" 00:24:23.191 } 00:24:23.191 } 00:24:23.191 ] 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "subsystem": "nvmf", 00:24:23.191 "config": [ 00:24:23.191 { 00:24:23.191 "method": "nvmf_set_config", 00:24:23.191 "params": { 00:24:23.191 "discovery_filter": "match_any", 00:24:23.191 "admin_cmd_passthru": { 00:24:23.191 "identify_ctrlr": false 00:24:23.191 }, 00:24:23.191 "dhchap_digests": [ 00:24:23.191 "sha256", 00:24:23.191 "sha384", 00:24:23.191 "sha512" 00:24:23.191 ], 00:24:23.191 "dhchap_dhgroups": [ 00:24:23.191 "null", 00:24:23.191 "ffdhe2048", 00:24:23.191 "ffdhe3072", 00:24:23.191 "ffdhe4096", 00:24:23.191 "ffdhe6144", 00:24:23.191 "ffdhe8192" 00:24:23.191 ] 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_set_max_subsystems", 00:24:23.191 "params": { 00:24:23.191 "max_subsystems": 1024 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_set_crdt", 00:24:23.191 "params": { 00:24:23.191 "crdt1": 0, 00:24:23.191 "crdt2": 0, 00:24:23.191 "crdt3": 0 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_create_transport", 00:24:23.191 "params": { 00:24:23.191 "trtype": "TCP", 00:24:23.191 "max_queue_depth": 128, 00:24:23.191 "max_io_qpairs_per_ctrlr": 127, 00:24:23.191 "in_capsule_data_size": 4096, 00:24:23.191 "max_io_size": 131072, 00:24:23.191 "io_unit_size": 131072, 00:24:23.191 "max_aq_depth": 128, 00:24:23.191 "num_shared_buffers": 511, 00:24:23.191 "buf_cache_size": 4294967295, 00:24:23.191 "dif_insert_or_strip": false, 00:24:23.191 "zcopy": false, 00:24:23.191 "c2h_success": false, 00:24:23.191 "sock_priority": 0, 00:24:23.191 "abort_timeout_sec": 1, 00:24:23.191 "ack_timeout": 0, 00:24:23.191 "data_wr_pool_size": 0 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_create_subsystem", 00:24:23.191 "params": { 00:24:23.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.191 "allow_any_host": false, 00:24:23.191 "serial_number": "00000000000000000000", 00:24:23.191 "model_number": "SPDK bdev Controller", 00:24:23.191 "max_namespaces": 32, 00:24:23.191 "min_cntlid": 1, 00:24:23.191 "max_cntlid": 65519, 00:24:23.191 "ana_reporting": false 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_subsystem_add_host", 00:24:23.191 "params": { 00:24:23.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.191 "host": "nqn.2016-06.io.spdk:host1", 00:24:23.191 "psk": "key0" 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_subsystem_add_ns", 00:24:23.191 "params": { 00:24:23.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.191 "namespace": { 00:24:23.191 "nsid": 1, 00:24:23.191 "bdev_name": "malloc0", 00:24:23.191 "nguid": "4FE2FA264AE64E358F0651235F8C81B3", 00:24:23.191 "uuid": "4fe2fa26-4ae6-4e35-8f06-51235f8c81b3", 00:24:23.191 "no_auto_visible": false 00:24:23.191 } 00:24:23.191 } 00:24:23.191 }, 00:24:23.191 { 00:24:23.191 "method": "nvmf_subsystem_add_listener", 00:24:23.191 "params": { 00:24:23.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.191 "listen_address": { 00:24:23.191 "trtype": "TCP", 00:24:23.191 "adrfam": "IPv4", 00:24:23.191 "traddr": "10.0.0.2", 00:24:23.191 "trsvcid": "4420" 00:24:23.191 }, 00:24:23.191 "secure_channel": false, 00:24:23.191 "sock_impl": "ssl" 00:24:23.191 } 00:24:23.191 } 00:24:23.191 ] 00:24:23.191 } 00:24:23.191 ] 00:24:23.191 }' 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3028640 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3028640 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3028640 ']' 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.191 19:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.191 [2024-10-13 19:54:12.755532] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:23.191 [2024-10-13 19:54:12.755679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.191 [2024-10-13 19:54:12.894935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.449 [2024-10-13 19:54:13.030817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.449 [2024-10-13 19:54:13.030899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.449 [2024-10-13 19:54:13.030925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.449 [2024-10-13 19:54:13.030962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.449 [2024-10-13 19:54:13.030979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
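The nvmf_tgt instance above is launched with "-c /dev/fd/62", i.e. the JSON blob echoed by tls.sh@273 is fed straight to the target as its startup configuration. A minimal sketch of that pattern, assuming a relative SPDK checkout and a CONFIG_JSON variable holding the save_config output (both are illustrative assumptions, not taken verbatim from this run):

# Replay a previously saved SPDK configuration at startup; the shell's
# process substitution supplies the /dev/fd/NN path seen in the log above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$CONFIG_JSON") &

# Or load the same JSON into an already running target over its RPC socket
# (socket path is an assumption for illustration):
echo "$CONFIG_JSON" | ./scripts/rpc.py -s /var/tmp/spdk.sock load_config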
00:24:23.449 [2024-10-13 19:54:13.032646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.015 [2024-10-13 19:54:13.580970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.015 [2024-10-13 19:54:13.613007] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.015 [2024-10-13 19:54:13.613330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3028794 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3028794 /var/tmp/bdevperf.sock 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3028794 ']' 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.015 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:24.015 "subsystems": [ 00:24:24.015 { 00:24:24.015 "subsystem": "keyring", 00:24:24.015 "config": [ 00:24:24.015 { 00:24:24.015 "method": "keyring_file_add_key", 00:24:24.015 "params": { 00:24:24.015 "name": "key0", 00:24:24.015 "path": "/tmp/tmp.UL0368ELSH" 00:24:24.015 } 00:24:24.015 } 00:24:24.015 ] 00:24:24.015 }, 00:24:24.015 { 00:24:24.015 "subsystem": "iobuf", 00:24:24.015 "config": [ 00:24:24.015 { 00:24:24.015 "method": "iobuf_set_options", 00:24:24.015 "params": { 00:24:24.015 "small_pool_count": 8192, 00:24:24.015 "large_pool_count": 1024, 00:24:24.015 "small_bufsize": 8192, 00:24:24.015 "large_bufsize": 135168 00:24:24.015 } 00:24:24.015 } 00:24:24.015 ] 00:24:24.015 }, 00:24:24.015 { 00:24:24.016 "subsystem": "sock", 00:24:24.016 "config": [ 00:24:24.016 { 00:24:24.016 "method": "sock_set_default_impl", 00:24:24.016 "params": { 00:24:24.016 "impl_name": "posix" 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "sock_impl_set_options", 00:24:24.016 "params": { 00:24:24.016 "impl_name": "ssl", 00:24:24.016 "recv_buf_size": 4096, 00:24:24.016 "send_buf_size": 4096, 00:24:24.016 "enable_recv_pipe": true, 00:24:24.016 "enable_quickack": false, 00:24:24.016 "enable_placement_id": 0, 00:24:24.016 "enable_zerocopy_send_server": true, 00:24:24.016 "enable_zerocopy_send_client": false, 00:24:24.016 "zerocopy_threshold": 0, 00:24:24.016 "tls_version": 0, 00:24:24.016 "enable_ktls": false 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "sock_impl_set_options", 00:24:24.016 "params": { 00:24:24.016 "impl_name": "posix", 00:24:24.016 "recv_buf_size": 2097152, 00:24:24.016 "send_buf_size": 2097152, 00:24:24.016 "enable_recv_pipe": true, 00:24:24.016 "enable_quickack": false, 00:24:24.016 "enable_placement_id": 0, 00:24:24.016 "enable_zerocopy_send_server": true, 00:24:24.016 "enable_zerocopy_send_client": false, 00:24:24.016 "zerocopy_threshold": 0, 00:24:24.016 "tls_version": 0, 00:24:24.016 "enable_ktls": false 00:24:24.016 } 00:24:24.016 } 00:24:24.016 ] 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "subsystem": "vmd", 00:24:24.016 "config": [] 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "subsystem": "accel", 00:24:24.016 "config": [ 00:24:24.016 { 00:24:24.016 "method": "accel_set_options", 00:24:24.016 "params": { 00:24:24.016 "small_cache_size": 128, 00:24:24.016 "large_cache_size": 16, 00:24:24.016 "task_count": 2048, 00:24:24.016 "sequence_count": 2048, 00:24:24.016 "buf_count": 2048 00:24:24.016 } 00:24:24.016 } 00:24:24.016 ] 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "subsystem": "bdev", 00:24:24.016 "config": [ 00:24:24.016 { 00:24:24.016 "method": "bdev_set_options", 00:24:24.016 "params": { 00:24:24.016 "bdev_io_pool_size": 65535, 00:24:24.016 "bdev_io_cache_size": 256, 00:24:24.016 "bdev_auto_examine": true, 00:24:24.016 "iobuf_small_cache_size": 128, 00:24:24.016 "iobuf_large_cache_size": 16 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_raid_set_options", 00:24:24.016 "params": { 00:24:24.016 "process_window_size_kb": 1024, 00:24:24.016 "process_max_bandwidth_mb_sec": 0 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_iscsi_set_options", 00:24:24.016 "params": { 00:24:24.016 
"timeout_sec": 30 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_nvme_set_options", 00:24:24.016 "params": { 00:24:24.016 "action_on_timeout": "none", 00:24:24.016 "timeout_us": 0, 00:24:24.016 "timeout_admin_us": 0, 00:24:24.016 "keep_alive_timeout_ms": 10000, 00:24:24.016 "arbitration_burst": 0, 00:24:24.016 "low_priority_weight": 0, 00:24:24.016 "medium_priority_weight": 0, 00:24:24.016 "high_priority_weight": 0, 00:24:24.016 "nvme_adminq_poll_period_us": 10000, 00:24:24.016 "nvme_ioq_poll_period_us": 0, 00:24:24.016 "io_queue_requests": 512, 00:24:24.016 "delay_cmd_submit": true, 00:24:24.016 "transport_retry_count": 4, 00:24:24.016 "bdev_retry_count": 3, 00:24:24.016 "transport_ack_timeout": 0, 00:24:24.016 "ctrlr_loss_timeout_sec": 0, 00:24:24.016 "reconnect_delay_sec": 0, 00:24:24.016 "fast_io_fail_timeout_sec": 0, 00:24:24.016 "disable_auto_failback": false, 00:24:24.016 "generate_uuids": false, 00:24:24.016 "transport_tos": 0, 00:24:24.016 "nvme_error_stat": false, 00:24:24.016 "rdma_srq_size": 0, 00:24:24.016 "io_path_stat": false, 00:24:24.016 "allow_accel_sequence": false, 00:24:24.016 "rdma_max_cq_size": 0, 00:24:24.016 "rdma_cm_event_timeout_ms": 0, 00:24:24.016 "dhchap_digests": [ 00:24:24.016 "sha256", 00:24:24.016 "sha384", 00:24:24.016 "sha512" 00:24:24.016 ], 00:24:24.016 "dhchap_dhgroups": [ 00:24:24.016 "null", 00:24:24.016 "ffdhe2048", 00:24:24.016 "ffdhe3072", 00:24:24.016 "ffdhe4096", 00:24:24.016 "ffdhe6144", 00:24:24.016 "ffdhe8192" 00:24:24.016 ] 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_nvme_attach_controller", 00:24:24.016 "params": { 00:24:24.016 "name": "nvme0", 00:24:24.016 "trtype": "TCP", 00:24:24.016 "adrfam": "IPv4", 00:24:24.016 "traddr": "10.0.0.2", 00:24:24.016 "trsvcid": "4420", 00:24:24.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.016 "prchk_reftag": false, 00:24:24.016 "prchk_guard": false, 00:24:24.016 "ctrlr_loss_timeout_sec": 0, 00:24:24.016 "reconnect_delay_sec": 0, 00:24:24.016 "fast_io_fail_timeout_sec": 0, 00:24:24.016 "psk": "key0", 00:24:24.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.016 "hdgst": false, 00:24:24.016 "ddgst": false, 00:24:24.016 "multipath": "multipath" 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_nvme_set_hotplug", 00:24:24.016 "params": { 00:24:24.016 "period_us": 100000, 00:24:24.016 "enable": false 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_enable_histogram", 00:24:24.016 "params": { 00:24:24.016 "name": "nvme0n1", 00:24:24.016 "enable": true 00:24:24.016 } 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "method": "bdev_wait_for_examine" 00:24:24.016 } 00:24:24.016 ] 00:24:24.016 }, 00:24:24.016 { 00:24:24.016 "subsystem": "nbd", 00:24:24.016 "config": [] 00:24:24.016 } 00:24:24.016 ] 00:24:24.016 }' 00:24:24.016 [2024-10-13 19:54:13.794454] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:24:24.016 [2024-10-13 19:54:13.794590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028794 ] 00:24:24.275 [2024-10-13 19:54:13.919751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.275 [2024-10-13 19:54:14.047728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.847 [2024-10-13 19:54:14.491831] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.153 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.153 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:25.153 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.154 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:25.438 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.438 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.438 Running I/O for 1 seconds... 00:24:26.372 2578.00 IOPS, 10.07 MiB/s 00:24:26.372 Latency(us) 00:24:26.372 [2024-10-13T17:54:16.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.372 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:26.372 Verification LBA range: start 0x0 length 0x2000 00:24:26.372 nvme0n1 : 1.03 2630.58 10.28 0.00 0.00 48059.60 8641.04 48739.37 00:24:26.372 [2024-10-13T17:54:16.187Z] =================================================================================================================== 00:24:26.372 [2024-10-13T17:54:16.187Z] Total : 2630.58 10.28 0.00 0.00 48059.60 8641.04 48739.37 00:24:26.372 { 00:24:26.372 "results": [ 00:24:26.372 { 00:24:26.372 "job": "nvme0n1", 00:24:26.372 "core_mask": "0x2", 00:24:26.372 "workload": "verify", 00:24:26.372 "status": "finished", 00:24:26.372 "verify_range": { 00:24:26.372 "start": 0, 00:24:26.372 "length": 8192 00:24:26.372 }, 00:24:26.372 "queue_depth": 128, 00:24:26.372 "io_size": 4096, 00:24:26.372 "runtime": 1.028669, 00:24:26.372 "iops": 2630.5837932318364, 00:24:26.372 "mibps": 10.27571794231186, 00:24:26.372 "io_failed": 0, 00:24:26.372 "io_timeout": 0, 00:24:26.372 "avg_latency_us": 48059.59878459391, 00:24:26.372 "min_latency_us": 8641.042962962963, 00:24:26.372 "max_latency_us": 48739.36592592593 00:24:26.372 } 00:24:26.372 ], 00:24:26.372 "core_count": 1 00:24:26.372 } 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:26.630 nvmf_trace.0 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3028794 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3028794 ']' 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3028794 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3028794 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3028794' 00:24:26.630 killing process with pid 3028794 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3028794 00:24:26.630 Received shutdown signal, test time was about 1.000000 seconds 00:24:26.630 00:24:26.630 Latency(us) 00:24:26.630 [2024-10-13T17:54:16.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.630 [2024-10-13T17:54:16.445Z] =================================================================================================================== 00:24:26.630 [2024-10-13T17:54:16.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.630 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3028794 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.564 rmmod nvme_tcp 00:24:27.564 rmmod nvme_fabrics 00:24:27.564 rmmod nvme_keyring 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.564 19:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3028640 ']' 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3028640 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3028640 ']' 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3028640 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3028640 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3028640' 00:24:27.564 killing process with pid 3028640 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3028640 00:24:27.564 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3028640 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.938 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9zVaQYKOCA /tmp/tmp.C52RPxxcZB /tmp/tmp.UL0368ELSH 00:24:30.843 00:24:30.843 real 1m52.874s 00:24:30.843 user 3m8.242s 00:24:30.843 sys 0m26.657s 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.843 ************************************ 00:24:30.843 END TEST nvmf_tls 
00:24:30.843 ************************************ 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:30.843 ************************************ 00:24:30.843 START TEST nvmf_fips 00:24:30.843 ************************************ 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:30.843 * Looking for test storage... 00:24:30.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:30.843 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.102 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.102 --rc genhtml_branch_coverage=1 00:24:31.102 --rc genhtml_function_coverage=1 00:24:31.102 --rc genhtml_legend=1 00:24:31.102 --rc geninfo_all_blocks=1 00:24:31.102 --rc geninfo_unexecuted_blocks=1 00:24:31.102 00:24:31.102 ' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.103 --rc genhtml_branch_coverage=1 00:24:31.103 --rc genhtml_function_coverage=1 00:24:31.103 --rc genhtml_legend=1 00:24:31.103 --rc geninfo_all_blocks=1 00:24:31.103 --rc geninfo_unexecuted_blocks=1 00:24:31.103 00:24:31.103 ' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.103 --rc genhtml_branch_coverage=1 00:24:31.103 --rc genhtml_function_coverage=1 00:24:31.103 --rc genhtml_legend=1 00:24:31.103 --rc geninfo_all_blocks=1 00:24:31.103 --rc geninfo_unexecuted_blocks=1 00:24:31.103 00:24:31.103 ' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.103 --rc genhtml_branch_coverage=1 00:24:31.103 --rc genhtml_function_coverage=1 00:24:31.103 --rc genhtml_legend=1 00:24:31.103 --rc geninfo_all_blocks=1 00:24:31.103 --rc geninfo_unexecuted_blocks=1 00:24:31.103 00:24:31.103 ' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:31.103 19:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.103 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:31.104 Error setting digest 00:24:31.104 40E2D73E117F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:31.104 40E2D73E117F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:31.104 
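The trace above ends with the expected negative check: with OPENSSL_CONF pointing at spdk_fips.conf and the Red Hat FIPS provider loaded, a non-approved digest such as MD5 must be refused, which is exactly the "inner_evp_generic_fetch:unsupported" error captured in the log. A minimal standalone sketch of the same check (not part of the harness, run against whatever OpenSSL is on PATH):

    # MD5 should fail under an active FIPS provider; SHA-256 should still work.
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 accepted - FIPS enforcement does NOT appear to be active"
    else
        echo "MD5 rejected, as expected with the FIPS provider active"
    fi
    echo -n test | openssl sha256    # FIPS-approved digest, expected to succeed
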
19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.104 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.636 19:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:33.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:33.636 19:54:22 
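The arrays populated above classify supported NICs purely by PCI vendor:device ID: Intel 0x1592/0x159b land in e810, Intel 0x37d2 in x722, and the listed Mellanox IDs in mlx. A small hedged sketch of the same bucketing (IDs copied from the trace; the list here is deliberately shortened):

    # Bucket a NIC by its vendor:device pair, as the e810/x722/mlx arrays do.
    declare -A family=(
        ["0x8086:0x1592"]=e810 ["0x8086:0x159b"]=e810   # Intel E810 (ice)
        ["0x8086:0x37d2"]=x722                          # Intel X722
        ["0x15b3:0x1017"]=mlx  ["0x15b3:0x101d"]=mlx    # Mellanox ConnectX
    )
    id="0x8086:0x159b"
    echo "${family[$id]:-unknown}"    # prints: e810
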
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.636 19:54:22 
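Each matched PCI function is then resolved to its kernel interface by walking /sys/bus/pci/devices/<bdf>/net/*, which is what produces the "Found net devices under ..." lines above. A standalone sketch of that walk, assuming the same two BDFs as in this run:

    # Map PCI functions to net interface names via sysfs.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue    # skip if the glob matched nothing
            printf 'Found net devices under %s: %s\n' "$pci" "${path##*/}"
        done
    done
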
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.636 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:33.636 00:24:33.636 --- 10.0.0.2 ping statistics --- 00:24:33.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.636 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:33.636 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:33.637 00:24:33.637 --- 10.0.0.1 ping statistics --- 00:24:33.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.637 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3031301 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3031301 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3031301 ']' 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.637 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.637 [2024-10-13 19:54:23.239087] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
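nvmf_tcp_init above splits one dual-port NIC across namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), TCP port 4420 is opened with a comment-tagged iptables rule, and both directions are ping-verified. A condensed root-shell sketch of the same topology, using the interface names from this run:

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The comment tag matters for teardown later: cleanup removes exactly the rules marked SPDK_NVMF and leaves the rest of the firewall untouched.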
00:24:33.637 [2024-10-13 19:54:23.239252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.637 [2024-10-13 19:54:23.379353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.895 [2024-10-13 19:54:23.517059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.895 [2024-10-13 19:54:23.517151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.895 [2024-10-13 19:54:23.517178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.895 [2024-10-13 19:54:23.517202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.895 [2024-10-13 19:54:23.517222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.895 [2024-10-13 19:54:23.518908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.461 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.461 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:34.461 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:34.461 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.461 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:34.461 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.7cV 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.7cV 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.7cV 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.7cV 00:24:34.462 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:34.720 [2024-10-13 19:54:24.502147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.720 [2024-10-13 19:54:24.518073] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:34.720 [2024-10-13 19:54:24.518389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.979 malloc0 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:34.979 19:54:24 
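fips.sh@137-140 above stage the TLS PSK: the interchange-format key is written byte-for-byte (echo -n, no trailing newline) into a mktemp file and locked down to mode 0600 before being handed to the target and, later, to bdevperf. The same three steps as a standalone sketch:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)    # e.g. /tmp/spdk-psk.7cV in this run
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"
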
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3031544 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3031544 /var/tmp/bdevperf.sock 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3031544 ']' 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.979 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:34.979 [2024-10-13 19:54:24.743986] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:34.979 [2024-10-13 19:54:24.744157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031544 ] 00:24:35.238 [2024-10-13 19:54:24.889927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.238 [2024-10-13 19:54:25.016234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.172 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.172 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:36.172 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7cV 00:24:36.172 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:36.430 [2024-10-13 19:54:26.180609] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.687 TLSTESTn1 00:24:36.687 19:54:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.687 Running I/O for 10 seconds... 
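The timed run that follows is driven entirely over bdevperf's private RPC socket: register the PSK file as key0, attach a TLS NVMe-oF controller against the in-namespace listener, then kick off the workload. The same sequence with the long workspace prefix trimmed (paths shortened here for readability; flags as traced):

    RPC=spdk/scripts/rpc.py; SOCK=/var/tmp/bdevperf.sock
    "$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/spdk-psk.7cV
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
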
00:24:38.993 2650.00 IOPS, 10.35 MiB/s [2024-10-13T17:54:29.740Z] 2703.00 IOPS, 10.56 MiB/s [2024-10-13T17:54:30.673Z] 2705.00 IOPS, 10.57 MiB/s [2024-10-13T17:54:31.606Z] 2705.50 IOPS, 10.57 MiB/s [2024-10-13T17:54:32.539Z] 2709.80 IOPS, 10.59 MiB/s [2024-10-13T17:54:33.472Z] 2712.00 IOPS, 10.59 MiB/s [2024-10-13T17:54:34.845Z] 2715.71 IOPS, 10.61 MiB/s [2024-10-13T17:54:35.778Z] 2719.00 IOPS, 10.62 MiB/s [2024-10-13T17:54:36.711Z] 2719.33 IOPS, 10.62 MiB/s [2024-10-13T17:54:36.711Z] 2722.00 IOPS, 10.63 MiB/s 00:24:46.896 Latency(us) 00:24:46.896 [2024-10-13T17:54:36.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.896 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.896 Verification LBA range: start 0x0 length 0x2000 00:24:46.896 TLSTESTn1 : 10.03 2727.18 10.65 0.00 0.00 46848.98 9369.22 35923.44 00:24:46.896 [2024-10-13T17:54:36.711Z] =================================================================================================================== 00:24:46.896 [2024-10-13T17:54:36.711Z] Total : 2727.18 10.65 0.00 0.00 46848.98 9369.22 35923.44 00:24:46.896 { 00:24:46.896 "results": [ 00:24:46.896 { 00:24:46.896 "job": "TLSTESTn1", 00:24:46.896 "core_mask": "0x4", 00:24:46.896 "workload": "verify", 00:24:46.896 "status": "finished", 00:24:46.896 "verify_range": { 00:24:46.896 "start": 0, 00:24:46.896 "length": 8192 00:24:46.896 }, 00:24:46.896 "queue_depth": 128, 00:24:46.896 "io_size": 4096, 00:24:46.896 "runtime": 10.026483, 00:24:46.896 "iops": 2727.1776155208163, 00:24:46.896 "mibps": 10.653037560628189, 00:24:46.896 "io_failed": 0, 00:24:46.896 "io_timeout": 0, 00:24:46.896 "avg_latency_us": 46848.98243460547, 00:24:46.896 "min_latency_us": 9369.22074074074, 00:24:46.896 "max_latency_us": 35923.43703703704 00:24:46.896 } 00:24:46.896 ], 00:24:46.896 "core_count": 1 00:24:46.896 } 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:46.896 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:46.897 nvmf_trace.0 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3031544 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3031544 ']' 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 3031544 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3031544 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3031544' 00:24:46.897 killing process with pid 3031544 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3031544 00:24:46.897 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.897 00:24:46.897 Latency(us) 00:24:46.897 [2024-10-13T17:54:36.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.897 [2024-10-13T17:54:36.712Z] =================================================================================================================== 00:24:46.897 [2024-10-13T17:54:36.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.897 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3031544 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.831 rmmod nvme_tcp 00:24:47.831 rmmod nvme_fabrics 00:24:47.831 rmmod nvme_keyring 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3031301 ']' 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3031301 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3031301 ']' 00:24:47.831 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3031301 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3031301 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:47.832 19:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3031301' 00:24:47.832 killing process with pid 3031301 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3031301 00:24:47.832 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3031301 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.207 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.7cV 00:24:51.109 00:24:51.109 real 0m20.293s 00:24:51.109 user 0m27.995s 00:24:51.109 sys 0m5.196s 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.109 ************************************ 00:24:51.109 END TEST nvmf_fips 00:24:51.109 ************************************ 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.109 19:54:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:51.367 ************************************ 00:24:51.367 START TEST nvmf_control_msg_list 00:24:51.367 ************************************ 00:24:51.367 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:51.367 * Looking for test storage... 
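Teardown in the trace above follows the harness' usual pattern: killprocess probes the PID with kill -0, reads the command name with ps -o comm= to confirm it is an SPDK reactor rather than sudo, then kills and waits; nvmf_tcp_fini restores an iptables dump filtered of the SPDK_NVMF-tagged rule, and _remove_spdk_ns runs with xtrace silenced, so the namespace removal itself is assumed rather than shown. A hedged sketch of that pattern:

    pid=3031301
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != sudo ]; then
            kill "$pid"
            wait "$pid" 2>/dev/null    # only succeeds for children of this shell
        fi
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed effect of _remove_spdk_ns
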
00:24:51.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:51.367 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:51.367 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:51.367 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:51.367 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.368 --rc genhtml_branch_coverage=1 00:24:51.368 --rc genhtml_function_coverage=1 00:24:51.368 --rc genhtml_legend=1 00:24:51.368 --rc geninfo_all_blocks=1 00:24:51.368 --rc geninfo_unexecuted_blocks=1 00:24:51.368 00:24:51.368 ' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.368 --rc genhtml_branch_coverage=1 00:24:51.368 --rc genhtml_function_coverage=1 00:24:51.368 --rc genhtml_legend=1 00:24:51.368 --rc geninfo_all_blocks=1 00:24:51.368 --rc geninfo_unexecuted_blocks=1 00:24:51.368 00:24:51.368 ' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.368 --rc genhtml_branch_coverage=1 00:24:51.368 --rc genhtml_function_coverage=1 00:24:51.368 --rc genhtml_legend=1 00:24:51.368 --rc geninfo_all_blocks=1 00:24:51.368 --rc geninfo_unexecuted_blocks=1 00:24:51.368 00:24:51.368 ' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.368 --rc genhtml_branch_coverage=1 00:24:51.368 --rc genhtml_function_coverage=1 00:24:51.368 --rc genhtml_legend=1 00:24:51.368 --rc geninfo_all_blocks=1 00:24:51.368 --rc geninfo_unexecuted_blocks=1 00:24:51.368 00:24:51.368 ' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.368 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:53.902 19:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:53.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.902 19:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:53.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:53.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:53.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.902 19:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:24:53.902 00:24:53.902 --- 10.0.0.2 ping statistics --- 00:24:53.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.902 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:24:53.902 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:24:53.902 00:24:53.903 --- 10.0.0.1 ping statistics --- 00:24:53.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.903 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3035100 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3035100 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3035100 ']' 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.903 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:53.903 [2024-10-13 19:54:43.396220] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:53.903 [2024-10-13 19:54:43.396362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.903 [2024-10-13 19:54:43.535488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.903 [2024-10-13 19:54:43.660854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.903 [2024-10-13 19:54:43.660933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.903 [2024-10-13 19:54:43.660954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.903 [2024-10-13 19:54:43.660974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.903 [2024-10-13 19:54:43.660990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
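Two things in the control_msg_list setup above are worth calling out. The "common.sh: line 33: [: : integer expression expected" complaint comes from the traced test '[' '' -eq 1 ']': bash's -eq needs integer operands, so comparing an empty/unset value prints that warning and the test simply evaluates false, which the script tolerates. The rest of nvmftestinit is network bring-up on a single host with two E810 ports (cvl_0_0, cvl_0_1): one port is moved into a private namespace for the target, the other stays in the root namespace for the initiator. Condensed from the commands visible in the trace (a sketch only; the interface names and 10.0.0.x addresses are whatever this particular run detected and assigned):

  # target side lives in its own namespace, initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port (the test tags the rule with '-m comment' so it can strip it at teardown)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF), which is where the "Starting SPDK v25.01-pre" banner just above comes from.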
00:24:53.903 [2024-10-13 19:54:43.662453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.837 [2024-10-13 19:54:44.405451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.837 Malloc0 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.837 19:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.837 [2024-10-13 19:54:44.476466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3035257 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3035258 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3035259 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3035257 00:24:54.837 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.837 [2024-10-13 19:54:44.597074] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:54.837 [2024-10-13 19:54:44.597557] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:54.837 [2024-10-13 19:54:44.597997] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:56.209 Initializing NVMe Controllers 00:24:56.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:56.209 Initialization complete. Launching workers. 
00:24:56.209 ======================================================== 00:24:56.209 Latency(us) 00:24:56.209 Device Information : IOPS MiB/s Average min max 00:24:56.209 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4628.00 18.08 215.49 200.79 1796.17 00:24:56.209 ======================================================== 00:24:56.209 Total : 4628.00 18.08 215.49 200.79 1796.17 00:24:56.209 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3035258 00:24:56.209 Initializing NVMe Controllers 00:24:56.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:56.209 Initialization complete. Launching workers. 00:24:56.209 ======================================================== 00:24:56.209 Latency(us) 00:24:56.209 Device Information : IOPS MiB/s Average min max 00:24:56.209 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40926.64 40552.84 41803.43 00:24:56.209 ======================================================== 00:24:56.209 Total : 25.00 0.10 40926.64 40552.84 41803.43 00:24:56.209 00:24:56.209 Initializing NVMe Controllers 00:24:56.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:56.209 Initialization complete. Launching workers. 00:24:56.209 ======================================================== 00:24:56.209 Latency(us) 00:24:56.209 Device Information : IOPS MiB/s Average min max 00:24:56.209 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40987.20 40769.38 41970.25 00:24:56.209 ======================================================== 00:24:56.209 Total : 25.00 0.10 40987.20 40769.38 41970.25 00:24:56.209 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3035259 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.209 rmmod nvme_tcp 00:24:56.209 rmmod nvme_fabrics 00:24:56.209 rmmod nvme_keyring 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@515 -- # '[' -n 3035100 ']' 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3035100 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3035100 ']' 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3035100 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3035100 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3035100' 00:24:56.209 killing process with pid 3035100 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3035100 00:24:56.209 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3035100 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.585 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.499 00:24:59.499 real 0m8.297s 00:24:59.499 user 0m8.073s 00:24:59.499 sys 0m2.805s 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.499 ************************************ 00:24:59.499 END TEST nvmf_control_msg_list 00:24:59.499 
************************************ 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:59.499 ************************************ 00:24:59.499 START TEST nvmf_wait_for_buf 00:24:59.499 ************************************ 00:24:59.499 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:59.804 * Looking for test storage... 00:24:59.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.804 --rc genhtml_branch_coverage=1 00:24:59.804 --rc genhtml_function_coverage=1 00:24:59.804 --rc genhtml_legend=1 00:24:59.804 --rc geninfo_all_blocks=1 00:24:59.804 --rc geninfo_unexecuted_blocks=1 00:24:59.804 00:24:59.804 ' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.804 --rc genhtml_branch_coverage=1 00:24:59.804 --rc genhtml_function_coverage=1 00:24:59.804 --rc genhtml_legend=1 00:24:59.804 --rc geninfo_all_blocks=1 00:24:59.804 --rc geninfo_unexecuted_blocks=1 00:24:59.804 00:24:59.804 ' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.804 --rc genhtml_branch_coverage=1 00:24:59.804 --rc genhtml_function_coverage=1 00:24:59.804 --rc genhtml_legend=1 00:24:59.804 --rc geninfo_all_blocks=1 00:24:59.804 --rc geninfo_unexecuted_blocks=1 00:24:59.804 00:24:59.804 ' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:59.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.804 --rc genhtml_branch_coverage=1 00:24:59.804 --rc genhtml_function_coverage=1 00:24:59.804 --rc genhtml_legend=1 00:24:59.804 --rc geninfo_all_blocks=1 00:24:59.804 --rc geninfo_unexecuted_blocks=1 00:24:59.804 00:24:59.804 ' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.804 19:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.804 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.805 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.731 
19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.731 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:01.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:01.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:01.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:01.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.732 19:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.732 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:25:01.991 00:25:01.991 --- 10.0.0.2 ping statistics --- 00:25:01.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.991 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:25:01.991 00:25:01.991 --- 10.0.0.1 ping statistics --- 00:25:01.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.991 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3037473 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 3037473 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3037473 ']' 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.991 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.991 [2024-10-13 19:54:51.691837] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
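The wait_for_buf variant reuses the same namespace plumbing; the difference is entirely in how the target is configured, which is why nvmf_tgt is started with --wait-for-rpc here: startup pauses until RPCs have shrunk the iobuf pools, leaving the transport with far fewer data buffers than it would normally get. Done by hand, the configuration that the following trace performs via rpc_cmd would look roughly like this (a sketch with flags copied verbatim from the trace, assuming the standard scripts/rpc.py client on the default RPC socket; rpc_cmd in the test scripts is effectively a wrapper around it):

  # target already running as: nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
  scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  scripts/rpc.py framework_start_init                  # finish startup with the shrunken pools
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24   # only 24 shared buffers
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The initiator then drives spdk_nvme_perf with -q 4 -o 131072 (four outstanding 128 KiB random reads) against that deliberately starved buffer pool, which is presumably what exercises the wait-for-buffer path the test is named after.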
00:25:01.991 [2024-10-13 19:54:51.691979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.249 [2024-10-13 19:54:51.825893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.250 [2024-10-13 19:54:51.956233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.250 [2024-10-13 19:54:51.956330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.250 [2024-10-13 19:54:51.956357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.250 [2024-10-13 19:54:51.956404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.250 [2024-10-13 19:54:51.956428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.250 [2024-10-13 19:54:51.958102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:03.184 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.184 19:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 Malloc0 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 [2024-10-13 19:54:53.083927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 [2024-10-13 19:54:53.108249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.442 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:03.442 [2024-10-13 19:54:53.234622] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:05.343 Initializing NVMe Controllers 00:25:05.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:05.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:05.343 Initialization complete. Launching workers. 00:25:05.343 ======================================================== 00:25:05.343 Latency(us) 00:25:05.343 Device Information : IOPS MiB/s Average min max 00:25:05.343 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 84.00 10.50 49527.86 7884.55 151562.12 00:25:05.343 ======================================================== 00:25:05.343 Total : 84.00 10.50 49527.86 7884.55 151562.12 00:25:05.343 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1318 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1318 -eq 0 ]] 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.343 rmmod nvme_tcp 00:25:05.343 rmmod nvme_fabrics 00:25:05.343 rmmod nvme_keyring 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3037473 ']' 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3037473 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3037473 ']' 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3037473 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3037473 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3037473' 00:25:05.343 killing process with pid 3037473 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3037473 00:25:05.343 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3037473 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.279 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.813 00:25:08.813 real 0m8.768s 00:25:08.813 user 0m5.331s 00:25:08.813 sys 0m2.239s 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.813 ************************************ 00:25:08.813 END TEST nvmf_wait_for_buf 00:25:08.813 ************************************ 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.813 19:54:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:08.813 ************************************ 00:25:08.813 START TEST nvmf_fuzz 00:25:08.813 ************************************ 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:08.813 * Looking for test storage... 00:25:08.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.813 --rc genhtml_branch_coverage=1 00:25:08.813 --rc genhtml_function_coverage=1 00:25:08.813 --rc genhtml_legend=1 00:25:08.813 --rc geninfo_all_blocks=1 00:25:08.813 --rc geninfo_unexecuted_blocks=1 00:25:08.813 00:25:08.813 ' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.813 --rc genhtml_branch_coverage=1 00:25:08.813 --rc genhtml_function_coverage=1 00:25:08.813 --rc genhtml_legend=1 00:25:08.813 --rc geninfo_all_blocks=1 00:25:08.813 --rc geninfo_unexecuted_blocks=1 00:25:08.813 00:25:08.813 ' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.813 --rc genhtml_branch_coverage=1 00:25:08.813 --rc genhtml_function_coverage=1 00:25:08.813 --rc genhtml_legend=1 00:25:08.813 --rc geninfo_all_blocks=1 00:25:08.813 --rc geninfo_unexecuted_blocks=1 00:25:08.813 00:25:08.813 ' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.813 --rc genhtml_branch_coverage=1 00:25:08.813 --rc genhtml_function_coverage=1 00:25:08.813 --rc genhtml_legend=1 00:25:08.813 --rc geninfo_all_blocks=1 00:25:08.813 --rc geninfo_unexecuted_blocks=1 00:25:08.813 00:25:08.813 ' 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.813 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.814 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.716 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.717 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.975 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.975 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.975 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.975 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:25:10.976 00:25:10.976 --- 10.0.0.2 ping statistics --- 00:25:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.976 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:25:10.976 00:25:10.976 --- 10.0.0.1 ping statistics --- 00:25:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.976 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3039953 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3039953 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3039953 ']' 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:10.976 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.909 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.175 Malloc0 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:12.175 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:44.240 Fuzzing completed. 
Shutting down the fuzz application 00:25:44.240 00:25:44.240 Dumping successful admin opcodes: 00:25:44.240 8, 9, 10, 24, 00:25:44.240 Dumping successful io opcodes: 00:25:44.240 0, 9, 00:25:44.240 NS: 0x2000008efec0 I/O qp, Total commands completed: 327851, total successful commands: 1942, random_seed: 1021136640 00:25:44.240 NS: 0x2000008efec0 admin qp, Total commands completed: 41296, total successful commands: 337, random_seed: 1800772992 00:25:44.240 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:44.498 Fuzzing completed. Shutting down the fuzz application 00:25:44.498 00:25:44.498 Dumping successful admin opcodes: 00:25:44.498 24, 00:25:44.498 Dumping successful io opcodes: 00:25:44.498 00:25:44.498 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1501426278 00:25:44.498 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1501644126 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.498 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.498 rmmod nvme_tcp 00:25:44.756 rmmod nvme_fabrics 00:25:44.756 rmmod nvme_keyring 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 3039953 ']' 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 3039953 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3039953 ']' 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3039953 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:44.756 19:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3039953 00:25:44.756 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.757 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:44.757 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3039953' 00:25:44.757 killing process with pid 3039953 00:25:44.757 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3039953 00:25:44.757 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3039953 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.131 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:48.036 00:25:48.036 real 0m39.707s 00:25:48.036 user 0m56.660s 00:25:48.036 sys 0m13.351s 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.036 ************************************ 00:25:48.036 END TEST nvmf_fuzz 00:25:48.036 ************************************ 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:48.036 19:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:48.295 
************************************ 00:25:48.295 START TEST nvmf_multiconnection 00:25:48.295 ************************************ 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:48.295 * Looking for test storage... 00:25:48.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.295 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:48.295 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:48.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.296 --rc genhtml_branch_coverage=1 00:25:48.296 --rc genhtml_function_coverage=1 00:25:48.296 --rc genhtml_legend=1 00:25:48.296 --rc geninfo_all_blocks=1 00:25:48.296 --rc geninfo_unexecuted_blocks=1 00:25:48.296 00:25:48.296 ' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:48.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.296 --rc genhtml_branch_coverage=1 00:25:48.296 --rc genhtml_function_coverage=1 00:25:48.296 --rc genhtml_legend=1 00:25:48.296 --rc geninfo_all_blocks=1 00:25:48.296 --rc geninfo_unexecuted_blocks=1 00:25:48.296 00:25:48.296 ' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:48.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.296 --rc genhtml_branch_coverage=1 00:25:48.296 --rc genhtml_function_coverage=1 00:25:48.296 --rc genhtml_legend=1 00:25:48.296 --rc geninfo_all_blocks=1 00:25:48.296 --rc geninfo_unexecuted_blocks=1 00:25:48.296 00:25:48.296 ' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:48.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.296 --rc genhtml_branch_coverage=1 00:25:48.296 --rc genhtml_function_coverage=1 00:25:48.296 --rc genhtml_legend=1 00:25:48.296 --rc geninfo_all_blocks=1 00:25:48.296 --rc geninfo_unexecuted_blocks=1 00:25:48.296 00:25:48.296 ' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.296 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.297 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.198 19:55:39 
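The "[: : integer expression expected" complaint above comes from build_nvmf_app_args evaluating '[' '' -eq 1 ']' at nvmf/common.sh line 33: test's -eq operator needs integers on both sides, so an empty harness variable makes the check print that diagnostic and return a non-zero status, and the run simply carries on as if the flag were unset. A minimal sketch of the behaviour and a guarded form, using a hypothetical SOME_FLAG variable:

SOME_FLAG=""
[ "$SOME_FLAG" -eq 1 ] && echo "flag set"        # prints the same "integer expression expected" diagnostic
[ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"   # defaulting to 0 keeps the comparison numeric and silent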
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:50.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.198 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:50.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:50.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
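The e810/x722/mlx arrays built above map known vendor:device IDs to NIC families; this run matched two Intel E810 functions (0x8086:0x159b, ice driver) and then listed the interfaces sysfs exposes under each one, which is where cvl_0_0 above and cvl_0_1 just below come from. A simplified sketch of the same lookup straight from sysfs (the harness actually consults its pre-built pci_bus_cache rather than globbing like this):

# Find net devices that sit on Intel E810 (0x8086:0x159b) PCI functions.
for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
    [ "$(cat "$pci/device")" = "0x159b" ] || continue
    for netdev in "$pci"/net/*; do
        [ -e "$netdev" ] || continue               # function has no bound net device
        echo "Found net device under ${pci##*/}: ${netdev##*/}"
    done
done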
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:50.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.199 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:25:50.458 00:25:50.458 --- 10.0.0.2 ping statistics --- 00:25:50.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.458 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:25:50.458 00:25:50.458 --- 10.0.0.1 ping statistics --- 00:25:50.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.458 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=3045831 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 3045831 00:25:50.458 19:55:40 
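nvmf_tcp_init above lays out the two-port topology the rest of the test rides on: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP 4420, and one ping in each direction proves the link before the target application is started inside the namespace. Condensed from the commands in the trace (interface names are the ones from this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port (the ipts helper also adds a comment)
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator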
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3045831 ']' 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.458 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.458 [2024-10-13 19:55:40.228893] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:25:50.458 [2024-10-13 19:55:40.229035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.717 [2024-10-13 19:55:40.365528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.717 [2024-10-13 19:55:40.503403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.717 [2024-10-13 19:55:40.503500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.717 [2024-10-13 19:55:40.503526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.717 [2024-10-13 19:55:40.503551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.717 [2024-10-13 19:55:40.503571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
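nvmfappstart above launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), records nvmfpid=3045831, and waitforlisten blocks until the application answers on /var/tmp/spdk.sock; the DPDK EAL parameters and app_setup_trace notices are nvmf_tgt initializing, and the reactor lines just below show it running on cores 0-3. A minimal sketch of that wait, polling with the stock scripts/rpc.py client rather than the harness's own waitforlisten helper:

# Poll the RPC socket until nvmf_tgt answers (give up after ~30 s).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for _ in $(seq 1 30); do
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 1
done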
00:25:50.717 [2024-10-13 19:55:40.506454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.717 [2024-10-13 19:55:40.506490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.717 [2024-10-13 19:55:40.506540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.717 [2024-10-13 19:55:40.506547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 [2024-10-13 19:55:41.226904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 Malloc1 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
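With the reactors running, multiconnection.sh creates the TCP transport (nvmf_create_transport -t tcp -o -u 8192) and then loops for i in $(seq 1 $NVMF_SUBSYS), i.e. 1 through 11, giving each subsystem a 64 MB malloc bdev with 512-byte blocks, a namespace, and a TCP listener on 10.0.0.2:4420; the rpc_cmd lines above and below are iterations of that loop. The same sequence written directly against rpc.py, which talks to the same /var/tmp/spdk.sock that rpc_cmd uses:

# Provision Malloc1..Malloc11 and subsystems cnode1..cnode11, mirroring the loop in the trace.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    rpc bdev_malloc_create 64 512 -b "Malloc$i"                                  # 64 MB bdev, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"       # allow any host, serial SPDK$i
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done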
00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 [2024-10-13 19:55:41.342687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 Malloc2 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.651 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:51.652 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.652 19:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 Malloc3 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 Malloc4 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 Malloc5 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.910 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 Malloc6 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 Malloc7 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
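The entries above are the same three-RPC pattern repeating, here for cnode6 and cnode7; it continues unchanged through cnode11. The test itself does not verify the result at this point, but when replaying the setup by hand a quick count of the subsystems the target now exposes can confirm the loop completed (an illustrative check, not part of multiconnection.sh):

# Expect 11 once the provisioning loop has finished.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems \
    | grep -c '"nqn.2016-06.io.spdk:cnode'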
00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.169 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 Malloc8 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 Malloc9 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:52.428 19:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 Malloc10 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.428 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.687 Malloc11 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.687 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:53.253 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:53.253 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:53.253 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.253 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:53.253 19:55:43 
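At this point the target side is fully provisioned and the initiator phase begins: for each of the 11 subsystems the script runs nvme connect with the host NQN and host ID generated by nvme gen-hostnqn earlier, aimed at the listener on 10.0.0.2:4420, and then waitforserial polls lsblk until a block device carrying that subsystem's serial appears; the first iteration (cnode1 / SPDK1) is what the trace above and below is stepping through. The whole phase condensed into one loop, with a stand-in for waitforserial (the helper name here is illustrative; the polling logic matches the calls visible in the trace):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # from nvme gen-hostnqn in the common.sh setup above
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

# Up to 16 tries, 2 s apart, counting block devices whose serial matches.
wait_for_serial() {
    local serial=$1 i=0 count=0
    while (( i++ <= 15 )); do
        count=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( count >= 1 )) && return 0
        sleep 2
    done
    return 1
}

for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    wait_for_serial "SPDK$i"
done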
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.778 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:56.036 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:56.036 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:56.036 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.036 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:56.036 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.933 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:58.872 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:58.872 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:58.872 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:58.872 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:58.872 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.771 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:01.337 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:01.337 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:01.337 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.337 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:01.337 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:03.314 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.572 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:04.138 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:04.138 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:26:04.138 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.138 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:04.138 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.665 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:06.924 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:06.924 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:06.924 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.924 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:06.924 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.459 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:09.717 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:09.717 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.717 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.717 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.717 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:12.243 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.244 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:12.501 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:12.501 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:12.501 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.501 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:12.501 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.026 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:15.592 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:15.592 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.592 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.592 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.593 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.490 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:18.422 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:18.422 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:18.422 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.422 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:18.423 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.319 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.319 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.319 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:20.577 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.577 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.577 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:20.577 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.577 19:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:21.510 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:21.510 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:21.510 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.510 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:21.510 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:23.407 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:23.407 [global] 00:26:23.407 thread=1 00:26:23.407 invalidate=1 00:26:23.407 rw=read 00:26:23.407 time_based=1 00:26:23.407 runtime=10 00:26:23.407 ioengine=libaio 00:26:23.407 direct=1 00:26:23.407 bs=262144 00:26:23.407 iodepth=64 00:26:23.407 norandommap=1 00:26:23.407 numjobs=1 00:26:23.407 00:26:23.407 [job0] 00:26:23.407 filename=/dev/nvme0n1 00:26:23.407 [job1] 00:26:23.407 filename=/dev/nvme10n1 00:26:23.407 [job2] 00:26:23.407 filename=/dev/nvme1n1 00:26:23.407 [job3] 00:26:23.407 filename=/dev/nvme2n1 00:26:23.407 [job4] 00:26:23.407 filename=/dev/nvme3n1 00:26:23.407 [job5] 00:26:23.407 filename=/dev/nvme4n1 00:26:23.407 [job6] 00:26:23.407 filename=/dev/nvme5n1 00:26:23.407 [job7] 00:26:23.407 filename=/dev/nvme6n1 00:26:23.407 [job8] 00:26:23.407 filename=/dev/nvme7n1 00:26:23.407 [job9] 00:26:23.407 filename=/dev/nvme8n1 00:26:23.407 [job10] 00:26:23.407 filename=/dev/nvme9n1 00:26:23.687 Could not set queue depth (nvme0n1) 00:26:23.687 Could not set queue depth (nvme10n1) 00:26:23.687 Could not set queue depth (nvme1n1) 00:26:23.687 Could not set queue depth (nvme2n1) 00:26:23.687 Could not set queue depth (nvme3n1) 00:26:23.687 Could not set queue depth (nvme4n1) 00:26:23.687 Could not set queue depth (nvme5n1) 00:26:23.687 Could not set queue depth (nvme6n1) 00:26:23.687 Could not set queue depth (nvme7n1) 00:26:23.687 Could not set queue depth (nvme8n1) 00:26:23.687 Could not set queue depth (nvme9n1) 00:26:23.687 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.687 fio-3.35 00:26:23.687 Starting 11 threads 00:26:35.883 00:26:35.883 job0: (groupid=0, jobs=1): err= 0: pid=3050212: Sun Oct 13 19:56:24 2024 00:26:35.883 read: IOPS=422, BW=106MiB/s (111MB/s)(1070MiB/10126msec) 00:26:35.883 slat (usec): min=8, max=400815, avg=1603.85, stdev=10860.15 00:26:35.883 clat (msec): min=2, max=712, avg=149.66, stdev=132.50 00:26:35.883 lat (msec): min=2, max=938, avg=151.26, stdev=134.15 00:26:35.883 clat percentiles (msec): 00:26:35.883 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 56], 20.00th=[ 58], 00:26:35.883 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 74], 60.00th=[ 136], 00:26:35.883 | 70.00th=[ 186], 80.00th=[ 255], 90.00th=[ 317], 95.00th=[ 388], 00:26:35.883 | 99.00th=[ 625], 99.50th=[ 642], 99.90th=[ 709], 99.95th=[ 709], 00:26:35.883 | 99.99th=[ 709] 00:26:35.883 bw ( KiB/s): min=31744, max=273920, per=14.05%, avg=107932.40, stdev=85045.05, samples=20 00:26:35.883 iops : min= 124, max= 1070, avg=421.50, stdev=332.29, samples=20 00:26:35.883 lat (msec) : 4=0.05%, 10=1.61%, 20=1.66%, 50=2.85%, 100=49.15% 00:26:35.883 lat (msec) : 250=24.11%, 500=17.19%, 750=3.39% 00:26:35.883 cpu : usr=0.18%, sys=1.09%, ctx=841, majf=0, minf=3721 00:26:35.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:35.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.883 issued rwts: total=4281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.883 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.883 job1: (groupid=0, jobs=1): err= 0: pid=3050213: Sun Oct 13 19:56:24 2024 00:26:35.883 read: IOPS=132, BW=33.2MiB/s (34.8MB/s)(337MiB/10164msec) 00:26:35.883 slat (usec): min=13, max=411039, avg=7417.81, stdev=30644.88 00:26:35.883 clat (msec): min=127, max=984, avg=474.41, stdev=159.87 00:26:35.883 lat (msec): min=127, max=984, avg=481.83, stdev=161.85 00:26:35.883 clat percentiles (msec): 00:26:35.883 | 1.00th=[ 153], 5.00th=[ 245], 10.00th=[ 284], 20.00th=[ 330], 00:26:35.883 | 30.00th=[ 380], 40.00th=[ 430], 50.00th=[ 451], 60.00th=[ 506], 00:26:35.883 | 70.00th=[ 550], 80.00th=[ 617], 90.00th=[ 684], 95.00th=[ 735], 00:26:35.883 | 99.00th=[ 927], 99.50th=[ 927], 99.90th=[ 986], 99.95th=[ 986], 00:26:35.883 | 99.99th=[ 986] 00:26:35.883 bw ( KiB/s): min=15872, 
max=51200, per=4.28%, avg=32883.35, stdev=10149.78, samples=20 00:26:35.883 iops : min= 62, max= 200, avg=128.30, stdev=39.69, samples=20 00:26:35.883 lat (msec) : 250=6.08%, 500=53.67%, 750=36.62%, 1000=3.63% 00:26:35.883 cpu : usr=0.08%, sys=0.51%, ctx=142, majf=0, minf=4097 00:26:35.883 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:26:35.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.883 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.883 issued rwts: total=1349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.883 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.883 job2: (groupid=0, jobs=1): err= 0: pid=3050217: Sun Oct 13 19:56:24 2024 00:26:35.883 read: IOPS=373, BW=93.3MiB/s (97.8MB/s)(947MiB/10148msec) 00:26:35.883 slat (usec): min=8, max=434813, avg=2273.62, stdev=12579.73 00:26:35.883 clat (msec): min=27, max=982, avg=169.09, stdev=142.74 00:26:35.883 lat (msec): min=27, max=982, avg=171.37, stdev=144.86 00:26:35.883 clat percentiles (msec): 00:26:35.883 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 54], 00:26:35.883 | 30.00th=[ 91], 40.00th=[ 115], 50.00th=[ 127], 60.00th=[ 140], 00:26:35.883 | 70.00th=[ 159], 80.00th=[ 243], 90.00th=[ 393], 95.00th=[ 477], 00:26:35.883 | 99.00th=[ 642], 99.50th=[ 684], 99.90th=[ 743], 99.95th=[ 986], 00:26:35.883 | 99.99th=[ 986] 00:26:35.883 bw ( KiB/s): min=26112, max=302592, per=12.40%, avg=95302.50, stdev=73308.78, samples=20 00:26:35.883 iops : min= 102, max= 1182, avg=372.15, stdev=286.39, samples=20 00:26:35.883 lat (msec) : 50=17.40%, 100=14.60%, 250=48.80%, 500=15.16%, 750=3.99% 00:26:35.883 lat (msec) : 1000=0.05% 00:26:35.883 cpu : usr=0.09%, sys=1.33%, ctx=421, majf=0, minf=4097 00:26:35.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:35.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.883 issued rwts: total=3787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.883 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.883 job3: (groupid=0, jobs=1): err= 0: pid=3050226: Sun Oct 13 19:56:24 2024 00:26:35.883 read: IOPS=433, BW=108MiB/s (114MB/s)(1100MiB/10153msec) 00:26:35.883 slat (usec): min=9, max=419740, avg=1510.38, stdev=10520.17 00:26:35.883 clat (usec): min=1097, max=1004.0k, avg=146032.40, stdev=146433.97 00:26:35.883 lat (usec): min=1169, max=1004.1k, avg=147542.77, stdev=147782.75 00:26:35.883 clat percentiles (msec): 00:26:35.883 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 55], 00:26:35.883 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 92], 00:26:35.883 | 70.00th=[ 138], 80.00th=[ 224], 90.00th=[ 397], 95.00th=[ 493], 00:26:35.883 | 99.00th=[ 617], 99.50th=[ 667], 99.90th=[ 743], 99.95th=[ 751], 00:26:35.883 | 99.99th=[ 1003] 00:26:35.883 bw ( KiB/s): min=10240, max=294834, per=14.44%, avg=110968.40, stdev=91705.04, samples=20 00:26:35.883 iops : min= 40, max= 1151, avg=433.35, stdev=358.18, samples=20 00:26:35.883 lat (msec) : 2=0.11%, 4=0.39%, 10=0.05%, 20=0.11%, 50=8.23% 00:26:35.883 lat (msec) : 100=53.26%, 250=18.88%, 500=14.77%, 750=4.18%, 2000=0.02% 00:26:35.883 cpu : usr=0.28%, sys=1.30%, ctx=728, majf=0, minf=4097 00:26:35.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:35.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.883 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.883 issued rwts: total=4401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.883 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.883 job4: (groupid=0, jobs=1): err= 0: pid=3050231: Sun Oct 13 19:56:24 2024 00:26:35.883 read: IOPS=130, BW=32.5MiB/s (34.1MB/s)(330MiB/10160msec) 00:26:35.883 slat (usec): min=13, max=440125, avg=7573.27, stdev=30002.73 00:26:35.883 clat (msec): min=147, max=1001, avg=484.28, stdev=137.39 00:26:35.883 lat (msec): min=175, max=1002, avg=491.85, stdev=139.76 00:26:35.883 clat percentiles (msec): 00:26:35.883 | 1.00th=[ 176], 5.00th=[ 279], 10.00th=[ 313], 20.00th=[ 359], 00:26:35.883 | 30.00th=[ 405], 40.00th=[ 447], 50.00th=[ 477], 60.00th=[ 510], 00:26:35.883 | 70.00th=[ 542], 80.00th=[ 584], 90.00th=[ 667], 95.00th=[ 735], 00:26:35.883 | 99.00th=[ 802], 99.50th=[ 869], 99.90th=[ 902], 99.95th=[ 1003], 00:26:35.883 | 99.99th=[ 1003] 00:26:35.883 bw ( KiB/s): min=10219, max=54272, per=4.19%, avg=32167.40, stdev=10861.54, samples=20 00:26:35.883 iops : min= 39, max= 212, avg=125.50, stdev=42.47, samples=20 00:26:35.883 lat (msec) : 250=2.88%, 500=52.61%, 750=40.50%, 1000=3.94%, 2000=0.08% 00:26:35.883 cpu : usr=0.12%, sys=0.48%, ctx=135, majf=0, minf=4097 00:26:35.883 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:26:35.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.884 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=1321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 job5: (groupid=0, jobs=1): err= 0: pid=3050253: Sun Oct 13 19:56:24 2024 00:26:35.884 read: IOPS=132, BW=33.1MiB/s (34.7MB/s)(337MiB/10161msec) 00:26:35.884 slat (usec): min=13, max=493143, avg=7478.81, stdev=32243.76 00:26:35.884 clat (msec): min=85, max=1096, avg=475.29, stdev=148.77 00:26:35.884 lat (msec): min=171, max=1096, avg=482.77, stdev=151.43 00:26:35.884 clat percentiles (msec): 00:26:35.884 | 1.00th=[ 171], 5.00th=[ 271], 10.00th=[ 305], 20.00th=[ 342], 00:26:35.884 | 30.00th=[ 393], 40.00th=[ 435], 50.00th=[ 460], 60.00th=[ 502], 00:26:35.884 | 70.00th=[ 531], 80.00th=[ 584], 90.00th=[ 684], 95.00th=[ 760], 00:26:35.884 | 99.00th=[ 869], 99.50th=[ 894], 99.90th=[ 1083], 99.95th=[ 1099], 00:26:35.884 | 99.99th=[ 1099] 00:26:35.884 bw ( KiB/s): min= 8192, max=57856, per=4.49%, avg=34532.16, stdev=11077.96, samples=19 00:26:35.884 iops : min= 32, max= 226, avg=134.74, stdev=43.29, samples=19 00:26:35.884 lat (msec) : 100=0.07%, 250=2.38%, 500=55.87%, 750=34.70%, 1000=6.84% 00:26:35.884 lat (msec) : 2000=0.15% 00:26:35.884 cpu : usr=0.00%, sys=0.60%, ctx=141, majf=0, minf=4097 00:26:35.884 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:26:35.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.884 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 job6: (groupid=0, jobs=1): err= 0: pid=3050275: Sun Oct 13 19:56:24 2024 00:26:35.884 read: IOPS=281, BW=70.4MiB/s (73.8MB/s)(708MiB/10053msec) 00:26:35.884 slat (usec): min=8, max=256718, avg=2752.01, stdev=13986.07 00:26:35.884 clat (usec): min=1364, max=840217, avg=224288.79, stdev=169471.06 00:26:35.884 lat (usec): 
min=1397, max=840348, avg=227040.80, stdev=171778.55 00:26:35.884 clat percentiles (msec): 00:26:35.884 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 21], 20.00th=[ 70], 00:26:35.884 | 30.00th=[ 115], 40.00th=[ 153], 50.00th=[ 211], 60.00th=[ 251], 00:26:35.884 | 70.00th=[ 284], 80.00th=[ 359], 90.00th=[ 443], 95.00th=[ 592], 00:26:35.884 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 768], 99.95th=[ 844], 00:26:35.884 | 99.99th=[ 844] 00:26:35.884 bw ( KiB/s): min=25088, max=282570, per=9.22%, avg=70842.40, stdev=58415.45, samples=20 00:26:35.884 iops : min= 98, max= 1103, avg=276.60, stdev=228.08, samples=20 00:26:35.884 lat (msec) : 2=0.39%, 4=6.29%, 10=0.88%, 20=2.30%, 50=6.53% 00:26:35.884 lat (msec) : 100=11.12%, 250=31.71%, 500=33.83%, 750=6.85%, 1000=0.11% 00:26:35.884 cpu : usr=0.17%, sys=0.89%, ctx=685, majf=0, minf=4097 00:26:35.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:35.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 job7: (groupid=0, jobs=1): err= 0: pid=3050292: Sun Oct 13 19:56:24 2024 00:26:35.884 read: IOPS=275, BW=69.0MiB/s (72.3MB/s)(698MiB/10125msec) 00:26:35.884 slat (usec): min=8, max=123342, avg=3158.24, stdev=12784.56 00:26:35.884 clat (msec): min=8, max=581, avg=228.67, stdev=117.50 00:26:35.884 lat (msec): min=8, max=638, avg=231.83, stdev=119.05 00:26:35.884 clat percentiles (msec): 00:26:35.884 | 1.00th=[ 27], 5.00th=[ 73], 10.00th=[ 87], 20.00th=[ 113], 00:26:35.884 | 30.00th=[ 146], 40.00th=[ 176], 50.00th=[ 220], 60.00th=[ 266], 00:26:35.884 | 70.00th=[ 296], 80.00th=[ 338], 90.00th=[ 384], 95.00th=[ 418], 00:26:35.884 | 99.00th=[ 518], 99.50th=[ 523], 99.90th=[ 584], 99.95th=[ 584], 00:26:35.884 | 99.99th=[ 584] 00:26:35.884 bw ( KiB/s): min=29184, max=192000, per=9.09%, avg=69835.75, stdev=39135.79, samples=20 00:26:35.884 iops : min= 114, max= 750, avg=272.70, stdev=152.92, samples=20 00:26:35.884 lat (msec) : 10=0.14%, 20=0.36%, 50=3.11%, 100=11.39%, 250=40.53% 00:26:35.884 lat (msec) : 500=43.00%, 750=1.47% 00:26:35.884 cpu : usr=0.11%, sys=0.90%, ctx=308, majf=0, minf=4098 00:26:35.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:26:35.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=2793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 job8: (groupid=0, jobs=1): err= 0: pid=3050344: Sun Oct 13 19:56:24 2024 00:26:35.884 read: IOPS=239, BW=60.0MiB/s (62.9MB/s)(607MiB/10124msec) 00:26:35.884 slat (usec): min=10, max=237289, avg=4021.13, stdev=15519.48 00:26:35.884 clat (msec): min=65, max=600, avg=262.47, stdev=110.99 00:26:35.884 lat (msec): min=66, max=676, avg=266.49, stdev=112.76 00:26:35.884 clat percentiles (msec): 00:26:35.884 | 1.00th=[ 73], 5.00th=[ 125], 10.00th=[ 148], 20.00th=[ 165], 00:26:35.884 | 30.00th=[ 184], 40.00th=[ 211], 50.00th=[ 251], 60.00th=[ 279], 00:26:35.884 | 70.00th=[ 309], 80.00th=[ 351], 90.00th=[ 418], 95.00th=[ 493], 00:26:35.884 | 99.00th=[ 558], 99.50th=[ 592], 99.90th=[ 600], 99.95th=[ 600], 00:26:35.884 | 99.99th=[ 600] 00:26:35.884 bw ( KiB/s): min=17920, max=107520, 
per=7.88%, avg=60547.75, stdev=23445.57, samples=20 00:26:35.884 iops : min= 70, max= 420, avg=236.40, stdev=91.58, samples=20 00:26:35.884 lat (msec) : 100=1.81%, 250=48.17%, 500=45.12%, 750=4.90% 00:26:35.884 cpu : usr=0.16%, sys=0.87%, ctx=255, majf=0, minf=4097 00:26:35.884 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:35.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=2429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 job9: (groupid=0, jobs=1): err= 0: pid=3050363: Sun Oct 13 19:56:24 2024 00:26:35.884 read: IOPS=347, BW=87.0MiB/s (91.2MB/s)(883MiB/10149msec) 00:26:35.884 slat (usec): min=12, max=96372, avg=2509.76, stdev=8654.83 00:26:35.884 clat (msec): min=9, max=463, avg=181.34, stdev=76.40 00:26:35.884 lat (msec): min=10, max=463, avg=183.85, stdev=77.58 00:26:35.884 clat percentiles (msec): 00:26:35.884 | 1.00th=[ 39], 5.00th=[ 93], 10.00th=[ 107], 20.00th=[ 126], 00:26:35.884 | 30.00th=[ 136], 40.00th=[ 146], 50.00th=[ 155], 60.00th=[ 171], 00:26:35.884 | 70.00th=[ 211], 80.00th=[ 249], 90.00th=[ 292], 95.00th=[ 338], 00:26:35.884 | 99.00th=[ 397], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 435], 00:26:35.884 | 99.99th=[ 464] 00:26:35.884 bw ( KiB/s): min=44032, max=135168, per=11.55%, avg=88727.55, stdev=29617.96, samples=20 00:26:35.884 iops : min= 172, max= 528, avg=346.45, stdev=115.57, samples=20 00:26:35.884 lat (msec) : 10=0.03%, 20=0.06%, 50=1.30%, 100=6.09%, 250=72.86% 00:26:35.884 lat (msec) : 500=19.66% 00:26:35.884 cpu : usr=0.12%, sys=1.28%, ctx=480, majf=0, minf=4097 00:26:35.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:35.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=3530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 job10: (groupid=0, jobs=1): err= 0: pid=3050367: Sun Oct 13 19:56:24 2024 00:26:35.884 read: IOPS=249, BW=62.4MiB/s (65.5MB/s)(637MiB/10201msec) 00:26:35.884 slat (usec): min=8, max=341631, avg=2722.89, stdev=15625.26 00:26:35.884 clat (usec): min=876, max=969254, avg=253278.69, stdev=165266.11 00:26:35.884 lat (usec): min=930, max=1078.3k, avg=256001.58, stdev=167158.02 00:26:35.884 clat percentiles (msec): 00:26:35.884 | 1.00th=[ 13], 5.00th=[ 55], 10.00th=[ 75], 20.00th=[ 122], 00:26:35.884 | 30.00th=[ 140], 40.00th=[ 169], 50.00th=[ 213], 60.00th=[ 255], 00:26:35.884 | 70.00th=[ 313], 80.00th=[ 397], 90.00th=[ 485], 95.00th=[ 575], 00:26:35.884 | 99.00th=[ 701], 99.50th=[ 936], 99.90th=[ 936], 99.95th=[ 969], 00:26:35.884 | 99.99th=[ 969] 00:26:35.884 bw ( KiB/s): min=14848, max=139776, per=8.27%, avg=63578.60, stdev=32359.79, samples=20 00:26:35.884 iops : min= 58, max= 546, avg=248.25, stdev=126.48, samples=20 00:26:35.884 lat (usec) : 1000=0.04% 00:26:35.884 lat (msec) : 2=0.24%, 4=0.24%, 10=0.31%, 20=0.27%, 50=3.10% 00:26:35.884 lat (msec) : 100=8.95%, 250=44.19%, 500=33.40%, 750=8.56%, 1000=0.71% 00:26:35.884 cpu : usr=0.10%, sys=0.94%, ctx=410, majf=0, minf=4097 00:26:35.884 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:35.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
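The trace above exercises target/multiconnection.sh end to end: for each of the 11 controllers it creates a 64 MiB malloc bdev, an NVMe-oF subsystem with serial SPDKn, a namespace, and a TCP listener on 10.0.0.2:4420, then connects from the host with nvme connect, polls lsblk until the SPDKn serial appears, and finally drives fio through the SPDK fio-wrapper script. A minimal sketch of that flow follows; it is a reconstruction, not part of the captured log, and it assumes an SPDK checkout with the nvmf target app already running, with scripts/rpc.py standing in for the test's rpc_cmd helper (the test itself runs the setup and connect steps as two separate loops).

#!/usr/bin/env bash
# Hedged sketch of the multiconnection flow seen in the trace above.
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
for i in $(seq 1 11); do
    # Target side (assumption: scripts/rpc.py in place of the test's rpc_cmd wrapper).
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, then wait for the serial to show up (mirrors waitforserial,
    # which in the real helper gives up after 15 two-second retries).
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDK$i; do sleep 2; done
done
# Read pass over all connected namespaces, as invoked in the log
# (fio-wrapper lives under scripts/ in the SPDK checkout).
scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10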
00:26:35.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:35.884 issued rwts: total=2548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:35.884 00:26:35.884 Run status group 0 (all jobs): 00:26:35.884 READ: bw=750MiB/s (787MB/s), 32.5MiB/s-108MiB/s (34.1MB/s-114MB/s), io=7654MiB (8026MB), run=10053-10201msec 00:26:35.884 00:26:35.884 Disk stats (read/write): 00:26:35.884 nvme0n1: ios=8417/0, merge=0/0, ticks=1228848/0, in_queue=1228848, util=97.07% 00:26:35.884 nvme10n1: ios=2553/0, merge=0/0, ticks=1204650/0, in_queue=1204650, util=97.28% 00:26:35.884 nvme1n1: ios=7409/0, merge=0/0, ticks=1230125/0, in_queue=1230125, util=97.52% 00:26:35.884 nvme2n1: ios=8647/0, merge=0/0, ticks=1237955/0, in_queue=1237955, util=97.69% 00:26:35.884 nvme3n1: ios=2474/0, merge=0/0, ticks=1222787/0, in_queue=1222787, util=97.75% 00:26:35.884 nvme4n1: ios=2519/0, merge=0/0, ticks=1219847/0, in_queue=1219847, util=98.12% 00:26:35.884 nvme5n1: ios=5408/0, merge=0/0, ticks=1242514/0, in_queue=1242514, util=98.31% 00:26:35.884 nvme6n1: ios=5412/0, merge=0/0, ticks=1220779/0, in_queue=1220779, util=98.43% 00:26:35.884 nvme7n1: ios=4700/0, merge=0/0, ticks=1226319/0, in_queue=1226319, util=98.86% 00:26:35.884 nvme8n1: ios=6892/0, merge=0/0, ticks=1228396/0, in_queue=1228396, util=99.08% 00:26:35.884 nvme9n1: ios=5095/0, merge=0/0, ticks=1276237/0, in_queue=1276237, util=99.22% 00:26:35.884 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:35.884 [global] 00:26:35.884 thread=1 00:26:35.884 invalidate=1 00:26:35.884 rw=randwrite 00:26:35.884 time_based=1 00:26:35.884 runtime=10 00:26:35.884 ioengine=libaio 00:26:35.884 direct=1 00:26:35.884 bs=262144 00:26:35.884 iodepth=64 00:26:35.884 norandommap=1 00:26:35.884 numjobs=1 00:26:35.884 00:26:35.884 [job0] 00:26:35.884 filename=/dev/nvme0n1 00:26:35.884 [job1] 00:26:35.884 filename=/dev/nvme10n1 00:26:35.884 [job2] 00:26:35.884 filename=/dev/nvme1n1 00:26:35.884 [job3] 00:26:35.884 filename=/dev/nvme2n1 00:26:35.885 [job4] 00:26:35.885 filename=/dev/nvme3n1 00:26:35.885 [job5] 00:26:35.885 filename=/dev/nvme4n1 00:26:35.885 [job6] 00:26:35.885 filename=/dev/nvme5n1 00:26:35.885 [job7] 00:26:35.885 filename=/dev/nvme6n1 00:26:35.885 [job8] 00:26:35.885 filename=/dev/nvme7n1 00:26:35.885 [job9] 00:26:35.885 filename=/dev/nvme8n1 00:26:35.885 [job10] 00:26:35.885 filename=/dev/nvme9n1 00:26:35.885 Could not set queue depth (nvme0n1) 00:26:35.885 Could not set queue depth (nvme10n1) 00:26:35.885 Could not set queue depth (nvme1n1) 00:26:35.885 Could not set queue depth (nvme2n1) 00:26:35.885 Could not set queue depth (nvme3n1) 00:26:35.885 Could not set queue depth (nvme4n1) 00:26:35.885 Could not set queue depth (nvme5n1) 00:26:35.885 Could not set queue depth (nvme6n1) 00:26:35.885 Could not set queue depth (nvme7n1) 00:26:35.885 Could not set queue depth (nvme8n1) 00:26:35.885 Could not set queue depth (nvme9n1) 00:26:35.885 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job3: 
(g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.885 fio-3.35 00:26:35.885 Starting 11 threads 00:26:45.852 00:26:45.852 job0: (groupid=0, jobs=1): err= 0: pid=3051097: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=261, BW=65.5MiB/s (68.7MB/s)(668MiB/10202msec); 0 zone resets 00:26:45.852 slat (usec): min=15, max=81459, avg=2510.45, stdev=7232.39 00:26:45.852 clat (usec): min=1380, max=542640, avg=241692.12, stdev=127874.92 00:26:45.852 lat (msec): min=2, max=542, avg=244.20, stdev=129.53 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 7], 5.00th=[ 41], 10.00th=[ 63], 20.00th=[ 121], 00:26:45.852 | 30.00th=[ 161], 40.00th=[ 213], 50.00th=[ 247], 60.00th=[ 271], 00:26:45.852 | 70.00th=[ 317], 80.00th=[ 359], 90.00th=[ 426], 95.00th=[ 460], 00:26:45.852 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 542], 00:26:45.852 | 99.99th=[ 542] 00:26:45.852 bw ( KiB/s): min=36864, max=117248, per=7.62%, avg=66761.25, stdev=26004.56, samples=20 00:26:45.852 iops : min= 144, max= 458, avg=260.65, stdev=101.55, samples=20 00:26:45.852 lat (msec) : 2=0.04%, 4=0.11%, 10=1.20%, 20=1.50%, 50=3.44% 00:26:45.852 lat (msec) : 100=11.38%, 250=34.24%, 500=47.53%, 750=0.56% 00:26:45.852 cpu : usr=0.80%, sys=0.91%, ctx=1597, majf=0, minf=2 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,2672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job1: (groupid=0, jobs=1): err= 0: pid=3051109: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=298, BW=74.6MiB/s (78.2MB/s)(758MiB/10154msec); 0 zone resets 00:26:45.852 slat (usec): min=21, max=129096, avg=2603.76, stdev=7206.89 00:26:45.852 clat (usec): min=1091, max=570242, avg=211700.50, stdev=120764.56 00:26:45.852 lat (usec): min=1132, max=570299, avg=214304.26, stdev=122441.07 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 96], 00:26:45.852 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 207], 60.00th=[ 222], 00:26:45.852 | 70.00th=[ 253], 80.00th=[ 326], 90.00th=[ 380], 95.00th=[ 414], 00:26:45.852 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:26:45.852 | 99.99th=[ 567] 00:26:45.852 bw ( KiB/s): min=28614, max=216143, per=8.67%, avg=75928.35, stdev=39026.78, samples=20 00:26:45.852 iops : min= 111, max= 844, 
avg=296.50, stdev=152.46, samples=20 00:26:45.852 lat (msec) : 2=0.26%, 4=0.59%, 10=2.08%, 20=3.04%, 50=6.67% 00:26:45.852 lat (msec) : 100=7.95%, 250=48.81%, 500=28.91%, 750=1.68% 00:26:45.852 cpu : usr=0.86%, sys=0.97%, ctx=1589, majf=0, minf=1 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,3030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job2: (groupid=0, jobs=1): err= 0: pid=3051110: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=315, BW=78.9MiB/s (82.7MB/s)(800MiB/10140msec); 0 zone resets 00:26:45.852 slat (usec): min=17, max=117960, avg=1813.77, stdev=5984.25 00:26:45.852 clat (usec): min=1067, max=585668, avg=200887.03, stdev=109752.11 00:26:45.852 lat (usec): min=1160, max=594313, avg=202700.80, stdev=110999.65 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 5], 5.00th=[ 34], 10.00th=[ 65], 20.00th=[ 118], 00:26:45.852 | 30.00th=[ 133], 40.00th=[ 150], 50.00th=[ 182], 60.00th=[ 207], 00:26:45.852 | 70.00th=[ 249], 80.00th=[ 309], 90.00th=[ 351], 95.00th=[ 384], 00:26:45.852 | 99.00th=[ 493], 99.50th=[ 558], 99.90th=[ 575], 99.95th=[ 575], 00:26:45.852 | 99.99th=[ 584] 00:26:45.852 bw ( KiB/s): min=36864, max=129536, per=9.17%, avg=80289.40, stdev=25673.27, samples=20 00:26:45.852 iops : min= 144, max= 506, avg=313.55, stdev=100.36, samples=20 00:26:45.852 lat (msec) : 2=0.19%, 4=0.56%, 10=0.56%, 20=1.12%, 50=5.06% 00:26:45.852 lat (msec) : 100=4.81%, 250=58.00%, 500=28.72%, 750=0.97% 00:26:45.852 cpu : usr=0.85%, sys=1.26%, ctx=1985, majf=0, minf=1 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,3200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job3: (groupid=0, jobs=1): err= 0: pid=3051111: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=431, BW=108MiB/s (113MB/s)(1102MiB/10224msec); 0 zone resets 00:26:45.852 slat (usec): min=16, max=131407, avg=1430.93, stdev=4740.27 00:26:45.852 clat (usec): min=927, max=489628, avg=146928.56, stdev=110522.17 00:26:45.852 lat (usec): min=957, max=489675, avg=148359.48, stdev=111509.42 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 27], 20.00th=[ 51], 00:26:45.852 | 30.00th=[ 64], 40.00th=[ 105], 50.00th=[ 130], 60.00th=[ 146], 00:26:45.852 | 70.00th=[ 180], 80.00th=[ 236], 90.00th=[ 309], 95.00th=[ 388], 00:26:45.852 | 99.00th=[ 451], 99.50th=[ 464], 99.90th=[ 481], 99.95th=[ 489], 00:26:45.852 | 99.99th=[ 489] 00:26:45.852 bw ( KiB/s): min=38912, max=283136, per=12.70%, avg=111181.55, stdev=62595.33, samples=20 00:26:45.852 iops : min= 152, max= 1106, avg=434.25, stdev=244.50, samples=20 00:26:45.852 lat (usec) : 1000=0.07% 00:26:45.852 lat (msec) : 2=0.27%, 4=0.75%, 10=3.33%, 20=2.86%, 50=12.95% 00:26:45.852 lat (msec) : 100=18.67%, 250=43.33%, 500=17.76% 00:26:45.852 cpu : usr=1.23%, sys=1.32%, ctx=2636, majf=0, minf=1 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,4408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job4: (groupid=0, jobs=1): err= 0: pid=3051112: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=249, BW=62.3MiB/s (65.3MB/s)(637MiB/10222msec); 0 zone resets 00:26:45.852 slat (usec): min=17, max=159824, avg=2975.32, stdev=8843.64 00:26:45.852 clat (usec): min=1066, max=576967, avg=253749.87, stdev=131797.24 00:26:45.852 lat (usec): min=1100, max=601259, avg=256725.19, stdev=133464.07 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 39], 20.00th=[ 146], 00:26:45.852 | 30.00th=[ 201], 40.00th=[ 224], 50.00th=[ 253], 60.00th=[ 292], 00:26:45.852 | 70.00th=[ 321], 80.00th=[ 388], 90.00th=[ 430], 95.00th=[ 456], 00:26:45.852 | 99.00th=[ 485], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 518], 00:26:45.852 | 99.99th=[ 575] 00:26:45.852 bw ( KiB/s): min=36864, max=173568, per=7.26%, avg=63554.75, stdev=29901.01, samples=20 00:26:45.852 iops : min= 144, max= 678, avg=248.15, stdev=116.81, samples=20 00:26:45.852 lat (msec) : 2=0.59%, 4=1.65%, 10=3.65%, 20=1.34%, 50=3.85% 00:26:45.852 lat (msec) : 100=4.63%, 250=32.80%, 500=51.22%, 750=0.27% 00:26:45.852 cpu : usr=0.75%, sys=0.88%, ctx=1269, majf=0, minf=1 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,2546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job5: (groupid=0, jobs=1): err= 0: pid=3051113: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=278, BW=69.7MiB/s (73.1MB/s)(713MiB/10223msec); 0 zone resets 00:26:45.852 slat (usec): min=16, max=37765, avg=2227.08, stdev=6037.60 00:26:45.852 clat (usec): min=1568, max=588819, avg=227231.13, stdev=108924.35 00:26:45.852 lat (usec): min=1801, max=588860, avg=229458.21, stdev=110031.06 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 113], 20.00th=[ 142], 00:26:45.852 | 30.00th=[ 176], 40.00th=[ 199], 50.00th=[ 218], 60.00th=[ 236], 00:26:45.852 | 70.00th=[ 268], 80.00th=[ 309], 90.00th=[ 363], 95.00th=[ 451], 00:26:45.852 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 584], 99.95th=[ 584], 00:26:45.852 | 99.99th=[ 592] 00:26:45.852 bw ( KiB/s): min=35840, max=124167, per=8.15%, avg=71330.05, stdev=24230.37, samples=20 00:26:45.852 iops : min= 140, max= 485, avg=278.55, stdev=94.66, samples=20 00:26:45.852 lat (msec) : 2=0.07%, 4=0.28%, 10=1.05%, 20=2.14%, 50=3.12% 00:26:45.852 lat (msec) : 100=1.93%, 250=55.02%, 500=35.09%, 750=1.30% 00:26:45.852 cpu : usr=0.93%, sys=0.86%, ctx=1603, majf=0, minf=1 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,2850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job6: (groupid=0, jobs=1): err= 0: pid=3051114: Sun Oct 13 19:56:34 2024 00:26:45.852 write: IOPS=381, BW=95.3MiB/s 
(100.0MB/s)(971MiB/10186msec); 0 zone resets 00:26:45.852 slat (usec): min=23, max=93146, avg=1981.48, stdev=5449.34 00:26:45.852 clat (usec): min=1363, max=564228, avg=165732.87, stdev=107810.57 00:26:45.852 lat (msec): min=2, max=564, avg=167.71, stdev=108.93 00:26:45.852 clat percentiles (msec): 00:26:45.852 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 58], 00:26:45.852 | 30.00th=[ 77], 40.00th=[ 113], 50.00th=[ 144], 60.00th=[ 171], 00:26:45.852 | 70.00th=[ 199], 80.00th=[ 271], 90.00th=[ 334], 95.00th=[ 368], 00:26:45.852 | 99.00th=[ 443], 99.50th=[ 481], 99.90th=[ 550], 99.95th=[ 558], 00:26:45.852 | 99.99th=[ 567] 00:26:45.852 bw ( KiB/s): min=44032, max=282624, per=11.17%, avg=97803.10, stdev=61748.09, samples=20 00:26:45.852 iops : min= 172, max= 1104, avg=381.95, stdev=241.24, samples=20 00:26:45.852 lat (msec) : 2=0.03%, 4=0.13%, 10=0.10%, 50=1.44%, 100=31.33% 00:26:45.852 lat (msec) : 250=43.43%, 500=23.15%, 750=0.39% 00:26:45.852 cpu : usr=1.31%, sys=1.22%, ctx=1552, majf=0, minf=1 00:26:45.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:45.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.852 issued rwts: total=0,3884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.852 job7: (groupid=0, jobs=1): err= 0: pid=3051115: Sun Oct 13 19:56:34 2024 00:26:45.853 write: IOPS=290, BW=72.5MiB/s (76.1MB/s)(737MiB/10154msec); 0 zone resets 00:26:45.853 slat (usec): min=23, max=157011, avg=2924.38, stdev=7431.52 00:26:45.853 clat (msec): min=7, max=525, avg=217.50, stdev=111.86 00:26:45.853 lat (msec): min=7, max=526, avg=220.42, stdev=113.31 00:26:45.853 clat percentiles (msec): 00:26:45.853 | 1.00th=[ 31], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 80], 00:26:45.853 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 211], 60.00th=[ 245], 00:26:45.853 | 70.00th=[ 271], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 422], 00:26:45.853 | 99.00th=[ 485], 99.50th=[ 498], 99.90th=[ 518], 99.95th=[ 518], 00:26:45.853 | 99.99th=[ 527] 00:26:45.853 bw ( KiB/s): min=34816, max=178688, per=8.43%, avg=73783.10, stdev=36157.02, samples=20 00:26:45.853 iops : min= 136, max= 698, avg=288.10, stdev=141.17, samples=20 00:26:45.853 lat (msec) : 10=0.14%, 20=0.27%, 50=2.04%, 100=19.21%, 250=40.46% 00:26:45.853 lat (msec) : 500=37.54%, 750=0.34% 00:26:45.853 cpu : usr=0.80%, sys=0.86%, ctx=1059, majf=0, minf=1 00:26:45.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:45.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.853 issued rwts: total=0,2946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.853 job8: (groupid=0, jobs=1): err= 0: pid=3051116: Sun Oct 13 19:56:34 2024 00:26:45.853 write: IOPS=278, BW=69.5MiB/s (72.9MB/s)(711MiB/10223msec); 0 zone resets 00:26:45.853 slat (usec): min=23, max=146605, avg=2842.89, stdev=7838.45 00:26:45.853 clat (usec): min=1687, max=572872, avg=227093.79, stdev=138019.17 00:26:45.853 lat (msec): min=2, max=572, avg=229.94, stdev=139.77 00:26:45.853 clat percentiles (msec): 00:26:45.853 | 1.00th=[ 17], 5.00th=[ 79], 10.00th=[ 101], 20.00th=[ 106], 00:26:45.853 | 30.00th=[ 115], 40.00th=[ 157], 50.00th=[ 182], 60.00th=[ 226], 00:26:45.853 | 
70.00th=[ 275], 80.00th=[ 363], 90.00th=[ 460], 95.00th=[ 502], 00:26:45.853 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 567], 00:26:45.853 | 99.99th=[ 575] 00:26:45.853 bw ( KiB/s): min=28672, max=141312, per=8.12%, avg=71149.90, stdev=36532.60, samples=20 00:26:45.853 iops : min= 112, max= 552, avg=277.80, stdev=142.63, samples=20 00:26:45.853 lat (msec) : 2=0.04%, 4=0.07%, 10=0.39%, 20=0.70%, 50=2.60% 00:26:45.853 lat (msec) : 100=6.37%, 250=53.89%, 500=30.64%, 750=5.31% 00:26:45.853 cpu : usr=1.00%, sys=0.79%, ctx=1165, majf=0, minf=1 00:26:45.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:45.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.853 issued rwts: total=0,2843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.853 job9: (groupid=0, jobs=1): err= 0: pid=3051117: Sun Oct 13 19:56:34 2024 00:26:45.853 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(757MiB/10208msec); 0 zone resets 00:26:45.853 slat (usec): min=20, max=129784, avg=2150.51, stdev=7087.05 00:26:45.853 clat (usec): min=1817, max=550722, avg=213420.79, stdev=130777.53 00:26:45.853 lat (usec): min=1853, max=550807, avg=215571.30, stdev=132617.41 00:26:45.853 clat percentiles (msec): 00:26:45.853 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 44], 20.00th=[ 77], 00:26:45.853 | 30.00th=[ 117], 40.00th=[ 178], 50.00th=[ 213], 60.00th=[ 245], 00:26:45.853 | 70.00th=[ 275], 80.00th=[ 347], 90.00th=[ 397], 95.00th=[ 439], 00:26:45.853 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 531], 99.95th=[ 550], 00:26:45.853 | 99.99th=[ 550] 00:26:45.853 bw ( KiB/s): min=34746, max=184320, per=8.67%, avg=75896.55, stdev=36591.89, samples=20 00:26:45.853 iops : min= 135, max= 720, avg=296.35, stdev=143.06, samples=20 00:26:45.853 lat (msec) : 2=0.03%, 4=0.20%, 10=2.28%, 20=0.76%, 50=8.39% 00:26:45.853 lat (msec) : 100=14.30%, 250=36.02%, 500=37.41%, 750=0.63% 00:26:45.853 cpu : usr=0.88%, sys=1.02%, ctx=1891, majf=0, minf=1 00:26:45.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:45.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.853 issued rwts: total=0,3029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.853 job10: (groupid=0, jobs=1): err= 0: pid=3051118: Sun Oct 13 19:56:34 2024 00:26:45.853 write: IOPS=350, BW=87.6MiB/s (91.8MB/s)(892MiB/10178msec); 0 zone resets 00:26:45.853 slat (usec): min=18, max=65463, avg=1802.87, stdev=5422.87 00:26:45.853 clat (usec): min=1135, max=529101, avg=180227.37, stdev=120227.76 00:26:45.853 lat (usec): min=1188, max=534959, avg=182030.24, stdev=121580.56 00:26:45.853 clat percentiles (msec): 00:26:45.853 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 37], 20.00th=[ 62], 00:26:45.853 | 30.00th=[ 87], 40.00th=[ 136], 50.00th=[ 176], 60.00th=[ 203], 00:26:45.853 | 70.00th=[ 243], 80.00th=[ 275], 90.00th=[ 355], 95.00th=[ 418], 00:26:45.853 | 99.00th=[ 489], 99.50th=[ 506], 99.90th=[ 514], 99.95th=[ 527], 00:26:45.853 | 99.99th=[ 531] 00:26:45.853 bw ( KiB/s): min=47104, max=198770, per=10.24%, avg=89641.00, stdev=40851.02, samples=20 00:26:45.853 iops : min= 184, max= 776, avg=350.10, stdev=159.54, samples=20 00:26:45.853 lat (msec) : 2=0.17%, 4=0.67%, 10=2.10%, 
20=2.86%, 50=9.03% 00:26:45.853 lat (msec) : 100=17.86%, 250=40.21%, 500=26.50%, 750=0.59% 00:26:45.853 cpu : usr=1.05%, sys=1.27%, ctx=2140, majf=0, minf=1 00:26:45.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:45.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:45.853 issued rwts: total=0,3566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:45.853 00:26:45.853 Run status group 0 (all jobs): 00:26:45.853 WRITE: bw=855MiB/s (897MB/s), 62.3MiB/s-108MiB/s (65.3MB/s-113MB/s), io=8744MiB (9168MB), run=10140-10224msec 00:26:45.853 00:26:45.853 Disk stats (read/write): 00:26:45.853 nvme0n1: ios=47/5307, merge=0/0, ticks=2386/1246879, in_queue=1249265, util=99.94% 00:26:45.853 nvme10n1: ios=45/5899, merge=0/0, ticks=4003/1209126, in_queue=1213129, util=100.00% 00:26:45.853 nvme1n1: ios=44/6229, merge=0/0, ticks=1599/1222320, in_queue=1223919, util=100.00% 00:26:45.853 nvme2n1: ios=0/8781, merge=0/0, ticks=0/1249119, in_queue=1249119, util=97.87% 00:26:45.853 nvme3n1: ios=47/5061, merge=0/0, ticks=3896/1219990, in_queue=1223886, util=100.00% 00:26:45.853 nvme4n1: ios=42/5667, merge=0/0, ticks=46/1249198, in_queue=1249244, util=98.44% 00:26:45.853 nvme5n1: ios=47/7765, merge=0/0, ticks=923/1240359, in_queue=1241282, util=100.00% 00:26:45.853 nvme6n1: ios=45/5730, merge=0/0, ticks=1509/1187365, in_queue=1188874, util=100.00% 00:26:45.853 nvme7n1: ios=40/5653, merge=0/0, ticks=851/1240339, in_queue=1241190, util=100.00% 00:26:45.853 nvme8n1: ios=0/6026, merge=0/0, ticks=0/1247853, in_queue=1247853, util=99.01% 00:26:45.853 nvme9n1: ios=42/6966, merge=0/0, ticks=3516/1205578, in_queue=1209094, util=100.00% 00:26:45.853 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:45.853 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:45.853 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.853 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:45.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.853 19:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.853 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:46.111 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.111 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:46.369 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:46.369 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:46.369 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:46.369 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:46.369 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:46.627 19:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.627 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:46.885 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.885 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:47.451 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:47.451 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:47.451 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:47.451 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:47.451 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:47.451 19:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.451 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:47.709 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.709 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:47.967 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:47.967 19:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.967 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:48.225 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.225 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.225 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:48.483 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:48.483 19:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.483 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:48.741 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.741 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:48.999 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:48.999 
19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.999 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.000 rmmod nvme_tcp 00:26:49.000 rmmod nvme_fabrics 00:26:49.000 rmmod nvme_keyring 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 3045831 ']' 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 3045831 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3045831 ']' 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3045831 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3045831 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3045831' 00:26:49.000 killing process with pid 3045831 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3045831 00:26:49.000 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3045831 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:52.284 19:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.284 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.192 00:26:54.192 real 1m5.857s 00:26:54.192 user 3m53.428s 00:26:54.192 sys 0m15.774s 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.192 ************************************ 00:26:54.192 END TEST nvmf_multiconnection 00:26:54.192 ************************************ 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:54.192 ************************************ 00:26:54.192 START TEST nvmf_initiator_timeout 00:26:54.192 ************************************ 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:54.192 * Looking for test storage... 
00:26:54.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:54.192 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.193 --rc genhtml_branch_coverage=1 00:26:54.193 --rc genhtml_function_coverage=1 00:26:54.193 --rc genhtml_legend=1 00:26:54.193 --rc geninfo_all_blocks=1 00:26:54.193 --rc geninfo_unexecuted_blocks=1 00:26:54.193 00:26:54.193 ' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.193 --rc genhtml_branch_coverage=1 00:26:54.193 --rc genhtml_function_coverage=1 00:26:54.193 --rc genhtml_legend=1 00:26:54.193 --rc geninfo_all_blocks=1 00:26:54.193 --rc geninfo_unexecuted_blocks=1 00:26:54.193 00:26:54.193 ' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.193 --rc genhtml_branch_coverage=1 00:26:54.193 --rc genhtml_function_coverage=1 00:26:54.193 --rc genhtml_legend=1 00:26:54.193 --rc geninfo_all_blocks=1 00:26:54.193 --rc geninfo_unexecuted_blocks=1 00:26:54.193 00:26:54.193 ' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.193 --rc genhtml_branch_coverage=1 00:26:54.193 --rc genhtml_function_coverage=1 00:26:54.193 --rc genhtml_legend=1 00:26:54.193 --rc geninfo_all_blocks=1 00:26:54.193 --rc geninfo_unexecuted_blocks=1 00:26:54.193 00:26:54.193 ' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.193 19:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:54.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:54.193 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.163 19:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:56.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.163 19:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:56.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:56.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.163 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.164 19:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:56.164 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.164 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.424 19:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:56.424 00:26:56.424 --- 10.0.0.2 ping statistics --- 00:26:56.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.424 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:26:56.424 00:26:56.424 --- 10.0.0.1 ping statistics --- 00:26:56.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.424 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=3054735 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 
3054735 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3054735 ']' 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.424 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.424 [2024-10-13 19:56:46.198151] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:26:56.424 [2024-10-13 19:56:46.198311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.683 [2024-10-13 19:56:46.341299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.683 [2024-10-13 19:56:46.486259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.683 [2024-10-13 19:56:46.486354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.683 [2024-10-13 19:56:46.486382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.683 [2024-10-13 19:56:46.486422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.683 [2024-10-13 19:56:46.486444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:56.683 [2024-10-13 19:56:46.489316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.683 [2024-10-13 19:56:46.489376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.684 [2024-10-13 19:56:46.489436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.684 [2024-10-13 19:56:46.489442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.618 Malloc0 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.618 Delay0 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.618 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 [2024-10-13 19:56:47.256818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.619 19:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 [2024-10-13 19:56:47.286516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.619 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:58.185 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:58.185 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:58.185 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:58.185 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:58.185 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:00.711 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3055171 00:27:00.712 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:00.712 19:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:00.712 [global] 00:27:00.712 thread=1 00:27:00.712 invalidate=1 00:27:00.712 rw=write 00:27:00.712 time_based=1 00:27:00.712 runtime=60 00:27:00.712 ioengine=libaio 00:27:00.712 direct=1 00:27:00.712 bs=4096 00:27:00.712 iodepth=1 00:27:00.712 norandommap=0 00:27:00.712 numjobs=1 00:27:00.712 00:27:00.712 verify_dump=1 00:27:00.712 verify_backlog=512 00:27:00.712 verify_state_save=0 00:27:00.712 do_verify=1 00:27:00.712 verify=crc32c-intel 00:27:00.712 [job0] 00:27:00.712 filename=/dev/nvme0n1 00:27:00.712 Could not set queue depth (nvme0n1) 00:27:00.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:00.712 fio-3.35 00:27:00.712 Starting 1 thread 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.242 true 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.242 true 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.242 true 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.242 true 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.242 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:27:06.522 true 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.522 true 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.522 true 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.522 true 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:06.522 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3055171 00:28:02.734 00:28:02.734 job0: (groupid=0, jobs=1): err= 0: pid=3055361: Sun Oct 13 19:57:50 2024 00:28:02.734 read: IOPS=92, BW=371KiB/s (380kB/s)(21.8MiB/60039msec) 00:28:02.734 slat (nsec): min=4080, max=78853, avg=18360.15, stdev=11409.14 00:28:02.734 clat (usec): min=262, max=41427, avg=3066.12, stdev=10153.29 00:28:02.734 lat (usec): min=271, max=41445, avg=3084.48, stdev=10154.66 00:28:02.734 clat percentiles (usec): 00:28:02.734 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 314], 00:28:02.734 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 363], 00:28:02.734 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 424], 95.00th=[41157], 00:28:02.734 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:28:02.734 | 99.99th=[41681] 00:28:02.734 write: IOPS=93, BW=375KiB/s (384kB/s)(22.0MiB/60039msec); 0 zone resets 00:28:02.734 slat (usec): min=5, max=17716, avg=22.02, stdev=282.13 00:28:02.734 clat (usec): min=202, max=41223k, avg=7576.58, stdev=549291.21 00:28:02.734 lat (usec): min=211, max=41223k, avg=7598.59, stdev=549291.23 00:28:02.734 clat percentiles (usec): 00:28:02.734 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 00:28:02.734 | 20.00th=[ 231], 30.00th=[ 235], 40.00th=[ 241], 00:28:02.734 | 50.00th=[ 245], 60.00th=[ 255], 70.00th=[ 269], 00:28:02.734 | 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 326], 00:28:02.734 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 494], 00:28:02.734 | 99.95th=[ 537], 
99.99th=[17112761] 00:28:02.734 bw ( KiB/s): min= 760, max= 8192, per=100.00%, avg=5006.22, stdev=2647.30, samples=9 00:28:02.734 iops : min= 190, max= 2048, avg=1251.56, stdev=661.83, samples=9 00:28:02.734 lat (usec) : 250=28.06%, 500=68.49%, 750=0.11% 00:28:02.734 lat (msec) : 50=3.33%, >=2000=0.01% 00:28:02.734 cpu : usr=0.19%, sys=0.34%, ctx=11206, majf=0, minf=1 00:28:02.734 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.734 issued rwts: total=5572,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.734 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:02.734 00:28:02.734 Run status group 0 (all jobs): 00:28:02.734 READ: bw=371KiB/s (380kB/s), 371KiB/s-371KiB/s (380kB/s-380kB/s), io=21.8MiB (22.8MB), run=60039-60039msec 00:28:02.734 WRITE: bw=375KiB/s (384kB/s), 375KiB/s-375KiB/s (384kB/s-384kB/s), io=22.0MiB (23.1MB), run=60039-60039msec 00:28:02.734 00:28:02.734 Disk stats (read/write): 00:28:02.734 nvme0n1: ios=5667/5632, merge=0/0, ticks=16935/1396, in_queue=18331, util=99.85% 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:02.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:02.734 nvmf hotplug test: fio successful as expected 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT 
SIGTERM EXIT 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.734 rmmod nvme_tcp 00:28:02.734 rmmod nvme_fabrics 00:28:02.734 rmmod nvme_keyring 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 3054735 ']' 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 3054735 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3054735 ']' 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3054735 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3054735 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3054735' 00:28:02.734 killing process with pid 3054735 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3054735 00:28:02.734 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3054735 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:02.734 19:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.734 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.637 00:28:04.637 real 1m10.149s 00:28:04.637 user 4m15.605s 00:28:04.637 sys 0m7.271s 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:04.637 ************************************ 00:28:04.637 END TEST nvmf_initiator_timeout 00:28:04.637 ************************************ 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.637 19:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:06.539 19:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.539 19:57:56 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:06.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:06.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:06.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:06.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.539 ************************************ 00:28:06.539 START TEST nvmf_perf_adq 00:28:06.539 ************************************ 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.539 * Looking for test storage... 
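The interface discovery a few entries above maps each supported E810 PCI function to its kernel net device purely through sysfs before the perf_adq test begins. A standalone sketch of that lookup, assuming one of the PCI addresses from this run (0000:0a:00.0); the echo format mirrors the "Found net devices under ..." lines in the log:

  # Resolve a PCI function to its bound net device(s), as nvmf/common.sh does with
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
  pci=0000:0a:00.0
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$path" ] || continue               # skip if no netdev is bound
      dev=${path##*/}                          # e.g. cvl_0_0
      state=$(cat /sys/class/net/"$dev"/operstate)
      echo "Found net devices under $pci: $dev ($state)"
  done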
00:28:06.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:06.539 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.540 --rc genhtml_branch_coverage=1 00:28:06.540 --rc genhtml_function_coverage=1 00:28:06.540 --rc genhtml_legend=1 00:28:06.540 --rc geninfo_all_blocks=1 00:28:06.540 --rc geninfo_unexecuted_blocks=1 00:28:06.540 00:28:06.540 ' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.540 --rc genhtml_branch_coverage=1 00:28:06.540 --rc genhtml_function_coverage=1 00:28:06.540 --rc genhtml_legend=1 00:28:06.540 --rc geninfo_all_blocks=1 00:28:06.540 --rc geninfo_unexecuted_blocks=1 00:28:06.540 00:28:06.540 ' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.540 --rc genhtml_branch_coverage=1 00:28:06.540 --rc genhtml_function_coverage=1 00:28:06.540 --rc genhtml_legend=1 00:28:06.540 --rc geninfo_all_blocks=1 00:28:06.540 --rc geninfo_unexecuted_blocks=1 00:28:06.540 00:28:06.540 ' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:06.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.540 --rc genhtml_branch_coverage=1 00:28:06.540 --rc genhtml_function_coverage=1 00:28:06.540 --rc genhtml_legend=1 00:28:06.540 --rc geninfo_all_blocks=1 00:28:06.540 --rc geninfo_unexecuted_blocks=1 00:28:06.540 00:28:06.540 ' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:06.540 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.540 19:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.444 19:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.444 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.444 19:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.444 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:08.444 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:09.383 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:11.926 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:17.213 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.214 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:17.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:28:17.214 00:28:17.214 --- 10.0.0.2 ping statistics --- 00:28:17.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.214 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:28:17.214 00:28:17.214 --- 10.0.0.1 ping statistics --- 00:28:17.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.214 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3067628 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3067628 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3067628 ']' 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.214 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.214 [2024-10-13 19:58:06.454623] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
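For reference, the nvmf_tcp_init sequence traced above amounts to moving one port of the E810 adapter into a private network namespace while its peer port stays in the root namespace, so traffic between the SPDK target (run inside the namespace) and the local initiator goes over the NIC rather than the loopback path. A rough standalone equivalent, using the interface names and addresses from this run; an illustrative sketch, not a copy of nvmf/common.sh:

  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2                                                  # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1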
00:28:17.214 [2024-10-13 19:58:06.454780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.214 [2024-10-13 19:58:06.591013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.214 [2024-10-13 19:58:06.729040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.214 [2024-10-13 19:58:06.729121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.214 [2024-10-13 19:58:06.729147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.214 [2024-10-13 19:58:06.729171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.214 [2024-10-13 19:58:06.729191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.214 [2024-10-13 19:58:06.732245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.214 [2024-10-13 19:58:06.732319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.214 [2024-10-13 19:58:06.732425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.214 [2024-10-13 19:58:06.732432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.780 
19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.780 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.038 [2024-10-13 19:58:07.810170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.038 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.314 Malloc1 00:28:18.314 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.314 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:18.314 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.315 [2024-10-13 19:58:07.939232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3067795 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:18.315 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:20.222 "tick_rate": 2700000000, 00:28:20.222 "poll_groups": [ 00:28:20.222 { 00:28:20.222 "name": "nvmf_tgt_poll_group_000", 00:28:20.222 "admin_qpairs": 1, 00:28:20.222 "io_qpairs": 1, 00:28:20.222 "current_admin_qpairs": 1, 00:28:20.222 "current_io_qpairs": 1, 00:28:20.222 "pending_bdev_io": 0, 00:28:20.222 "completed_nvme_io": 16415, 00:28:20.222 "transports": [ 00:28:20.222 { 00:28:20.222 "trtype": "TCP" 00:28:20.222 } 00:28:20.222 ] 00:28:20.222 }, 00:28:20.222 { 00:28:20.222 "name": "nvmf_tgt_poll_group_001", 00:28:20.222 "admin_qpairs": 0, 00:28:20.222 "io_qpairs": 1, 00:28:20.222 "current_admin_qpairs": 0, 00:28:20.222 "current_io_qpairs": 1, 00:28:20.222 "pending_bdev_io": 0, 00:28:20.222 "completed_nvme_io": 16551, 00:28:20.222 "transports": [ 00:28:20.222 { 00:28:20.222 "trtype": "TCP" 00:28:20.222 } 00:28:20.222 ] 00:28:20.222 }, 00:28:20.222 { 00:28:20.222 "name": "nvmf_tgt_poll_group_002", 00:28:20.222 "admin_qpairs": 0, 00:28:20.222 "io_qpairs": 1, 00:28:20.222 "current_admin_qpairs": 0, 00:28:20.222 "current_io_qpairs": 1, 00:28:20.222 "pending_bdev_io": 0, 00:28:20.222 "completed_nvme_io": 17031, 00:28:20.222 "transports": [ 00:28:20.222 { 00:28:20.222 "trtype": "TCP" 00:28:20.222 } 00:28:20.222 ] 00:28:20.222 }, 00:28:20.222 { 00:28:20.222 "name": "nvmf_tgt_poll_group_003", 00:28:20.222 "admin_qpairs": 0, 00:28:20.222 "io_qpairs": 1, 00:28:20.222 "current_admin_qpairs": 0, 00:28:20.222 "current_io_qpairs": 1, 00:28:20.222 "pending_bdev_io": 0, 00:28:20.222 "completed_nvme_io": 17145, 00:28:20.222 "transports": [ 00:28:20.222 { 00:28:20.222 "trtype": "TCP" 00:28:20.222 } 00:28:20.222 ] 00:28:20.222 } 00:28:20.222 ] 00:28:20.222 }' 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:20.222 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:20.222 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:20.222 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:20.222 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3067795 00:28:28.328 Initializing NVMe Controllers 00:28:28.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:28.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:28.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:28.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:28.328 Initialization complete. Launching workers. 00:28:28.328 ======================================================== 00:28:28.328 Latency(us) 00:28:28.328 Device Information : IOPS MiB/s Average min max 00:28:28.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8874.80 34.67 7211.74 3225.55 11106.11 00:28:28.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9008.10 35.19 7104.10 2657.78 12495.44 00:28:28.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9264.79 36.19 6908.20 3171.42 10721.21 00:28:28.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8996.80 35.14 7113.48 2547.50 11753.99 00:28:28.328 ======================================================== 00:28:28.328 Total : 36144.48 141.19 7082.65 2547.50 12495.44 00:28:28.328 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.616 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.617 rmmod nvme_tcp 00:28:28.617 rmmod nvme_fabrics 00:28:28.617 rmmod nvme_keyring 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3067628 ']' 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3067628 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3067628 ']' 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3067628 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3067628 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3067628' 00:28:28.617 killing process with pid 3067628 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3067628 00:28:28.617 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3067628 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.016 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.921 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.921 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:31.921 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:31.921 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:32.856 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:35.382 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.654 19:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:40.654 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:40.654 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.654 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:40.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:40.655 19:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:40.655 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:40.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:28:40.655 00:28:40.655 --- 10.0.0.2 ping statistics --- 00:28:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.655 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:40.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:28:40.655 00:28:40.655 --- 10.0.0.1 ping statistics --- 00:28:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.655 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:40.655 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:40.655 net.core.busy_poll = 1 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:40.655 net.core.busy_read = 1 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3070663 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3070663 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3070663 ']' 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:40.655 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.655 [2024-10-13 19:58:30.211258] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:28:40.655 [2024-10-13 19:58:30.211411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.655 [2024-10-13 19:58:30.351951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.914 [2024-10-13 19:58:30.497292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
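For context, the ADQ-specific driver setup traced just above reduces to a handful of ethtool/sysctl/tc commands run against the target port inside its namespace. Interface name, queue layout, and the 10.0.0.2:4420 flow are taken from this run; this is an illustrative recap, not the exact perf_adq.sh code:

  ethtool --offload cvl_0_0 hw-tc-offload on                      # enable TC hardware offload on the E810 port
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                  # busy-poll the NVMe/TCP sockets
  sysctl -w net.core.busy_read=1
  # carve the port into two traffic classes: TC0 = 2 default queues, TC1 = 2 ADQ queues
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic for the target (10.0.0.2:4420) into TC1, offloaded to hardware
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching target-side knobs appear in the RPCs that follow (sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1). The effect is then checked after the second perf run by counting busy poll groups in nvmf_get_stats: with ADQ enabled only two of the four poll groups carry I/O (the other two report zero io_qpairs), whereas the baseline run earlier expected one active qpair on every poll group.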
00:28:40.914 [2024-10-13 19:58:30.497375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.914 [2024-10-13 19:58:30.497412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.914 [2024-10-13 19:58:30.497439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.914 [2024-10-13 19:58:30.497459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.914 [2024-10-13 19:58:30.500522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.914 [2024-10-13 19:58:30.500598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.914 [2024-10-13 19:58:30.500983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.914 [2024-10-13 19:58:30.500994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.479 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.737 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.996 19:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 [2024-10-13 19:58:31.696414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 Malloc1 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.996 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 [2024-10-13 19:58:31.810990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.254 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.254 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3070821 00:28:42.254 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:42.254 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.154 19:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:44.154 "tick_rate": 2700000000, 00:28:44.154 "poll_groups": [ 00:28:44.154 { 00:28:44.154 "name": "nvmf_tgt_poll_group_000", 00:28:44.154 "admin_qpairs": 1, 00:28:44.154 "io_qpairs": 2, 00:28:44.154 "current_admin_qpairs": 1, 00:28:44.154 "current_io_qpairs": 2, 00:28:44.154 "pending_bdev_io": 0, 00:28:44.154 "completed_nvme_io": 19315, 00:28:44.154 "transports": [ 00:28:44.154 { 00:28:44.154 "trtype": "TCP" 00:28:44.154 } 00:28:44.154 ] 00:28:44.154 }, 00:28:44.154 { 00:28:44.154 "name": "nvmf_tgt_poll_group_001", 00:28:44.154 "admin_qpairs": 0, 00:28:44.154 "io_qpairs": 2, 00:28:44.154 "current_admin_qpairs": 0, 00:28:44.154 "current_io_qpairs": 2, 00:28:44.154 "pending_bdev_io": 0, 00:28:44.154 "completed_nvme_io": 19377, 00:28:44.154 "transports": [ 00:28:44.154 { 00:28:44.154 "trtype": "TCP" 00:28:44.154 } 00:28:44.154 ] 00:28:44.154 }, 00:28:44.154 { 00:28:44.154 "name": "nvmf_tgt_poll_group_002", 00:28:44.154 "admin_qpairs": 0, 00:28:44.154 "io_qpairs": 0, 00:28:44.154 "current_admin_qpairs": 0, 00:28:44.154 "current_io_qpairs": 0, 00:28:44.154 "pending_bdev_io": 0, 00:28:44.154 "completed_nvme_io": 0, 00:28:44.154 "transports": [ 00:28:44.154 { 00:28:44.154 "trtype": "TCP" 00:28:44.154 } 00:28:44.154 ] 00:28:44.154 }, 00:28:44.154 { 00:28:44.154 "name": "nvmf_tgt_poll_group_003", 00:28:44.154 "admin_qpairs": 0, 00:28:44.154 "io_qpairs": 0, 00:28:44.154 "current_admin_qpairs": 0, 00:28:44.154 "current_io_qpairs": 0, 00:28:44.154 "pending_bdev_io": 0, 00:28:44.154 "completed_nvme_io": 0, 00:28:44.154 "transports": [ 00:28:44.154 { 00:28:44.154 "trtype": "TCP" 00:28:44.154 } 00:28:44.154 ] 00:28:44.154 } 00:28:44.154 ] 00:28:44.154 }' 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:44.154 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3070821 00:28:52.271 Initializing NVMe Controllers 00:28:52.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:52.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:52.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:52.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:52.271 Initialization complete. Launching workers. 
00:28:52.271 ======================================================== 00:28:52.271 Latency(us) 00:28:52.271 Device Information : IOPS MiB/s Average min max 00:28:52.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5213.80 20.37 12277.67 2840.86 57030.31 00:28:52.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5372.80 20.99 11913.74 2563.82 57177.62 00:28:52.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5523.20 21.57 11593.13 2434.10 57951.55 00:28:52.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5134.20 20.06 12471.39 2632.86 59097.66 00:28:52.271 ======================================================== 00:28:52.271 Total : 21243.99 82.98 12054.47 2434.10 59097.66 00:28:52.271 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.271 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.271 rmmod nvme_tcp 00:28:52.271 rmmod nvme_fabrics 00:28:52.530 rmmod nvme_keyring 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3070663 ']' 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3070663 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3070663 ']' 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3070663 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3070663 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3070663' 00:28:52.530 killing process with pid 3070663 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3070663 00:28:52.530 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3070663 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:53.903 
19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.903 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:55.805 00:28:55.805 real 0m49.462s 00:28:55.805 user 2m54.347s 00:28:55.805 sys 0m9.814s 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.805 ************************************ 00:28:55.805 END TEST nvmf_perf_adq 00:28:55.805 ************************************ 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:55.805 ************************************ 00:28:55.805 START TEST nvmf_shutdown 00:28:55.805 ************************************ 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:55.805 * Looking for test storage... 
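The nvmftestfini sequence that closes the perf_adq run (traced in the two log lines above) reduces to the steps below. A sketch under the assumption that $nvmfpid holds the target pid recorded at start-up (3070663 in this run) and that cvl_0_0_ns_spdk / cvl_0_1 are the namespace and initiator interface used throughout this job:

sync
modprobe -v -r nvme-tcp                               # the helper retries this up to 20 times
modprobe -v -r nvme-fabrics                           # nvme_keyring is removed along the way
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactor_0 process
iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                       # roughly what the _remove_spdk_ns helper does
ip -4 addr flush cvl_0_1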
00:28:55.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:55.805 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:56.063 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:56.063 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.063 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.064 --rc genhtml_branch_coverage=1 00:28:56.064 --rc genhtml_function_coverage=1 00:28:56.064 --rc genhtml_legend=1 00:28:56.064 --rc geninfo_all_blocks=1 00:28:56.064 --rc geninfo_unexecuted_blocks=1 00:28:56.064 00:28:56.064 ' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.064 --rc genhtml_branch_coverage=1 00:28:56.064 --rc genhtml_function_coverage=1 00:28:56.064 --rc genhtml_legend=1 00:28:56.064 --rc geninfo_all_blocks=1 00:28:56.064 --rc geninfo_unexecuted_blocks=1 00:28:56.064 00:28:56.064 ' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.064 --rc genhtml_branch_coverage=1 00:28:56.064 --rc genhtml_function_coverage=1 00:28:56.064 --rc genhtml_legend=1 00:28:56.064 --rc geninfo_all_blocks=1 00:28:56.064 --rc geninfo_unexecuted_blocks=1 00:28:56.064 00:28:56.064 ' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.064 --rc genhtml_branch_coverage=1 00:28:56.064 --rc genhtml_function_coverage=1 00:28:56.064 --rc genhtml_legend=1 00:28:56.064 --rc geninfo_all_blocks=1 00:28:56.064 --rc geninfo_unexecuted_blocks=1 00:28:56.064 00:28:56.064 ' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
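The scripts/common.sh trace above is a dotted-version comparison (lcov 1.15 against 2) used to decide which lcov options to export. A hypothetical stand-alone helper with the same field-by-field numeric comparison, not the script's own cmp_versions:

version_lt() {                        # return 0 when $1 is strictly older than $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # matches the 'lt 1.15 2' call traced above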
00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:56.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:56.064 19:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:56.064 ************************************ 00:28:56.064 START TEST nvmf_shutdown_tc1 00:28:56.064 ************************************ 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:56.064 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.065 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.595 19:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.595 19:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:58.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:58.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:58.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:58.595 19:58:47 
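The device discovery walk above (continued for the second port just below) matches NICs by PCI vendor:device ID and then resolves each PCI function to its kernel net device through sysfs. A sketch of the equivalent manual queries on this host; cvl_0_0 and cvl_0_1 are simply the interface names this rig exposes under those devices:

lspci -D -d 8086:159b                         # E810 ports: 0000:0a:00.0 and 0000:0a:00.1 on this host
ls /sys/bus/pci/devices/0000:0a:00.0/net/     # -> cvl_0_0  (used as the target-side port)
ls /sys/bus/pci/devices/0000:0a:00.1/net/     # -> cvl_0_1  (used as the initiator-side port)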
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:58.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.595 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.595 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:28:58.596 00:28:58.596 --- 10.0.0.2 ping statistics --- 00:28:58.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.596 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:28:58.596 00:28:58.596 --- 10.0.0.1 ping statistics --- 00:28:58.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.596 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3074133 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3074133 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3074133 ']' 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
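The nvmf_tcp_init trace above builds the point-to-point NVMe/TCP fabric: one E810 port is moved into a network namespace to play the target, the other stays in the root namespace as the initiator, an iptables rule admits port 4420, and a ping in each direction proves connectivity. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                                    # tagged so nvmftestfini can strip it later
ping -c 1 10.0.0.2                                                    # initiator -> target (0.274 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator (0.187 ms above)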
00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.596 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.596 [2024-10-13 19:58:48.208221] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:28:58.596 [2024-10-13 19:58:48.208352] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.596 [2024-10-13 19:58:48.345654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.854 [2024-10-13 19:58:48.475516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.854 [2024-10-13 19:58:48.475588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.854 [2024-10-13 19:58:48.475608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.854 [2024-10-13 19:58:48.475632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.854 [2024-10-13 19:58:48.475647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.854 [2024-10-13 19:58:48.478139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.854 [2024-10-13 19:58:48.478203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.854 [2024-10-13 19:58:48.478250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.854 [2024-10-13 19:58:48.478257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.421 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.421 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:59.421 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:59.421 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:59.421 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.679 [2024-10-13 19:58:49.250277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:59.679 19:58:49 
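With the fabric up, the trace above starts nvmf_tgt inside the target namespace and creates the TCP transport over RPC. A sketch with paths relative to the SPDK checkout; -m 0x1E is binary 11110, i.e. reactors on cores 1 through 4, which is exactly the four "Reactor started" notices above, and framework_wait_init stands in for the suite's waitforlisten polling:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done        # wait for the RPC socket to appear
./scripts/rpc.py framework_wait_init                         # block until app start-up has finished
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport flags as used by the suite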
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.679 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.679 Malloc1 
00:28:59.679 [2024-10-13 19:58:49.405124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.679 Malloc2 00:28:59.937 Malloc3 00:28:59.937 Malloc4 00:29:00.195 Malloc5 00:29:00.195 Malloc6 00:29:00.195 Malloc7 00:29:00.452 Malloc8 00:29:00.452 Malloc9 00:29:00.710 Malloc10 00:29:00.710 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.710 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3074436 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3074436 /var/tmp/bdevperf.sock 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3074436 ']' 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
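Each pass of the create_subsystems loop above appends a block to rpcs.txt that, once replayed, creates one malloc bdev and exposes it on the listener announced at 10.0.0.2:4420. A sketch for subsystem 1 only; the serial number and the -a (allow any host) policy here are illustrative rather than copied from shutdown.sh:

rpc.py bdev_malloc_create 64 512 -b Malloc1                   # MALLOC_BDEV_SIZE=64 MB, MALLOC_BLOCK_SIZE=512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420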
00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 
"trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.711 "hdgst": ${hdgst:-false}, 00:29:00.711 "ddgst": ${ddgst:-false} 00:29:00.711 }, 00:29:00.711 "method": "bdev_nvme_attach_controller" 00:29:00.711 } 00:29:00.711 EOF 00:29:00.711 )") 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:00.711 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:00.711 { 00:29:00.711 "params": { 00:29:00.711 "name": "Nvme$subsystem", 00:29:00.711 "trtype": "$TEST_TRANSPORT", 00:29:00.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.711 "adrfam": "ipv4", 00:29:00.711 "trsvcid": "$NVMF_PORT", 00:29:00.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.712 "hdgst": ${hdgst:-false}, 00:29:00.712 "ddgst": ${ddgst:-false} 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 } 00:29:00.712 EOF 00:29:00.712 )") 00:29:00.712 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:00.712 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:29:00.712 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:29:00.712 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme1", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme2", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme3", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme4", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme5", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme6", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme7", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme8", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme9", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 },{ 00:29:00.712 "params": { 00:29:00.712 "name": "Nvme10", 00:29:00.712 "trtype": "tcp", 00:29:00.712 "traddr": "10.0.0.2", 00:29:00.712 "adrfam": "ipv4", 00:29:00.712 "trsvcid": "4420", 00:29:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:00.712 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:00.712 "hdgst": false, 00:29:00.712 "ddgst": false 00:29:00.712 }, 00:29:00.712 "method": "bdev_nvme_attach_controller" 00:29:00.712 }' 00:29:00.712 [2024-10-13 19:58:50.416819] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:00.712 [2024-10-13 19:58:50.416950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:00.970 [2024-10-13 19:58:50.560833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.970 [2024-10-13 19:58:50.692149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3074436 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:03.540 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:04.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3074436 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:04.471 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3074133 00:29:04.471 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:04.471 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:29:04.471 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:29:04.471 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 
"trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 
"params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.472 { 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme$subsystem", 00:29:04.472 "trtype": "$TEST_TRANSPORT", 00:29:04.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "$NVMF_PORT", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.472 "hdgst": ${hdgst:-false}, 00:29:04.472 "ddgst": ${ddgst:-false} 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 } 00:29:04.472 EOF 00:29:04.472 )") 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:29:04.472 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme1", 00:29:04.472 "trtype": "tcp", 00:29:04.472 "traddr": "10.0.0.2", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "4420", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.472 "hdgst": false, 00:29:04.472 "ddgst": false 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 },{ 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme2", 00:29:04.472 "trtype": "tcp", 00:29:04.472 "traddr": "10.0.0.2", 00:29:04.472 "adrfam": "ipv4", 00:29:04.472 "trsvcid": "4420", 00:29:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.472 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.472 "hdgst": false, 00:29:04.472 "ddgst": false 00:29:04.472 }, 00:29:04.472 "method": "bdev_nvme_attach_controller" 00:29:04.472 },{ 00:29:04.472 "params": { 00:29:04.472 "name": "Nvme3", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme4", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme5", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme6", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme7", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme8", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme9", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 },{ 00:29:04.473 "params": { 00:29:04.473 "name": "Nvme10", 00:29:04.473 "trtype": "tcp", 00:29:04.473 "traddr": "10.0.0.2", 00:29:04.473 "adrfam": "ipv4", 00:29:04.473 "trsvcid": "4420", 00:29:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.473 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.473 "hdgst": false, 00:29:04.473 "ddgst": false 00:29:04.473 }, 00:29:04.473 "method": "bdev_nvme_attach_controller" 00:29:04.473 }' 00:29:04.473 [2024-10-13 19:58:54.252054] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:04.473 [2024-10-13 19:58:54.252196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074980 ] 00:29:04.731 [2024-10-13 19:58:54.382583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.731 [2024-10-13 19:58:54.510517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.629 Running I/O for 1 seconds... 00:29:07.820 1477.00 IOPS, 92.31 MiB/s 00:29:07.820 Latency(us) 00:29:07.820 [2024-10-13T17:58:57.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.820 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme1n1 : 1.19 161.39 10.09 0.00 0.00 392543.45 28350.39 333990.87 00:29:07.820 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme2n1 : 1.12 171.65 10.73 0.00 0.00 361892.72 26020.22 318456.41 00:29:07.820 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme3n1 : 1.16 220.04 13.75 0.00 0.00 277783.51 22427.88 268746.15 00:29:07.820 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme4n1 : 1.18 216.68 13.54 0.00 0.00 276869.12 25243.50 293601.28 00:29:07.820 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme5n1 : 1.17 164.52 10.28 0.00 0.00 358479.90 26214.40 301368.51 00:29:07.820 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme6n1 : 1.21 216.51 13.53 0.00 0.00 266398.03 11019.76 315349.52 00:29:07.820 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme7n1 : 1.19 214.52 13.41 0.00 0.00 265587.29 37865.24 295154.73 00:29:07.820 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 
Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme8n1 : 1.21 211.49 13.22 0.00 0.00 264747.61 25243.50 304475.40 00:29:07.820 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme9n1 : 1.20 167.75 10.48 0.00 0.00 321558.36 6553.60 352632.23 00:29:07.820 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.820 Verification LBA range: start 0x0 length 0x400 00:29:07.820 Nvme10n1 : 1.22 210.16 13.14 0.00 0.00 256853.90 24175.50 302921.96 00:29:07.820 [2024-10-13T17:58:57.635Z] =================================================================================================================== 00:29:07.820 [2024-10-13T17:58:57.635Z] Total : 1954.71 122.17 0.00 0.00 298254.71 6553.60 352632.23 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.752 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.752 rmmod nvme_tcp 00:29:08.752 rmmod nvme_fabrics 00:29:08.752 rmmod nvme_keyring 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3074133 ']' 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3074133 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3074133 ']' 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3074133 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3074133 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3074133' 00:29:09.010 killing process with pid 3074133 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3074133 00:29:09.010 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3074133 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:11.539 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:29:11.798 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.798 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.798 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.798 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.798 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.774 00:29:13.774 real 0m17.660s 00:29:13.774 user 0m56.968s 00:29:13.774 sys 0m4.100s 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.774 ************************************ 00:29:13.774 END TEST nvmf_shutdown_tc1 00:29:13.774 ************************************ 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
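
Before the tc2 banner that follows, stoptarget and nvmftestfini unwound the tc1 fixture in a fixed order: scratch files first, then the host-side kernel modules, then the nvmf_tgt process, and finally the SPDK-tagged iptables rules and the target network namespace. A condensed sketch of that order, with helper bodies simplified to the commands visible in the trace and paths abbreviated to $testdir:

# Teardown order used by shutdown_tc1 (simplified; helper names follow the trace).
stoptarget() {
    rm -f ./local-job0-0-verify.state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
    nvmftestfini
}

nvmftestfini() {
    sync
    modprobe -v -r nvme-tcp nvme-fabrics        # unload host-side NVMe/TCP modules
    killprocess "$nvmfpid"                      # kill + wait on nvmf_tgt (pid 3074133 here)
    # "iptr": keep every iptables rule except the ones tagged SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk             # remove_spdk_ns (body assumed)
    ip -4 addr flush cvl_0_1
}
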
00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.774 ************************************ 00:29:13.774 START TEST nvmf_shutdown_tc2 00:29:13.774 ************************************ 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:13.774 19:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:13.774 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:13.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:13.775 19:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:13.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:13.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:13.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.775 19:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:29:13.775 00:29:13.775 --- 10.0.0.2 ping statistics --- 00:29:13.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.775 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:29:13.775 00:29:13.775 --- 10.0.0.1 ping statistics --- 00:29:13.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.775 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:13.775 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3076144 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3076144 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3076144 ']' 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
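
nvmftestinit above pins the two e810 ports into a split topology: cvl_0_0 becomes the target NIC inside a fresh cvl_0_0_ns_spdk namespace at 10.0.0.2, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, port 4420 is opened, both directions are ping-tested, and nvmfappstart then runs nvmf_tgt inside the namespace. A condensed sketch of those steps, with the nvmf_tgt path abbreviated (the trace uses the full workspace path under build/bin):

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"           # target NIC into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side (root namespace)
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the NVMe/TCP port on the initiator interface, tagged so teardown can find it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                           # root namespace -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # target namespace -> initiator

# nvmfappstart: run the target inside the namespace and wait for its RPC socket.
ip netns exec "$NVMF_TARGET_NAMESPACE" nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"    # framework helper; polls /var/tmp/spdk.sock
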
00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:14.034 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.034 [2024-10-13 19:59:03.690112] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:14.034 [2024-10-13 19:59:03.690238] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.034 [2024-10-13 19:59:03.830859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.292 [2024-10-13 19:59:03.973691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.292 [2024-10-13 19:59:03.973780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.292 [2024-10-13 19:59:03.973805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.292 [2024-10-13 19:59:03.973829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.292 [2024-10-13 19:59:03.973848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.292 [2024-10-13 19:59:03.976745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.292 [2024-10-13 19:59:03.976853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.292 [2024-10-13 19:59:03.976896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.292 [2024-10-13 19:59:03.976902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.226 [2024-10-13 19:59:04.720518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:15.226 19:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.226 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.226 Malloc1 
00:29:15.226 [2024-10-13 19:59:04.865313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.226 Malloc2 00:29:15.226 Malloc3 00:29:15.484 Malloc4 00:29:15.484 Malloc5 00:29:15.742 Malloc6 00:29:15.742 Malloc7 00:29:16.001 Malloc8 00:29:16.001 Malloc9 00:29:16.001 Malloc10 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3076456 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3076456 /var/tmp/bdevperf.sock 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3076456 ']' 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
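
The Malloc1 through Malloc10 bdevs and the 10.0.0.2 port 4420 listener notice above come from create_subsystems: the loop appends one block per subsystem to rpcs.txt (the repeated cat steps in the trace) and a single rpc_cmd then replays the whole file against the target's /var/tmp/spdk.sock before bdevperf is launched. The block itself is not echoed in the trace; a typical one for subsystem $i would look roughly like this sketch, where the method names are standard SPDK RPCs but the malloc size, block size and serial number are illustrative assumptions:

# Hypothetical per-subsystem block appended to rpcs.txt (exact arguments not shown in the trace).
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF

# After the loop, one rpc_cmd call feeds the whole file to the target.
rpc_cmd < "$testdir/rpcs.txt"
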
00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.001 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.001 { 00:29:16.001 "params": { 00:29:16.001 "name": "Nvme$subsystem", 00:29:16.001 "trtype": "$TEST_TRANSPORT", 00:29:16.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.001 "adrfam": "ipv4", 00:29:16.001 "trsvcid": "$NVMF_PORT", 00:29:16.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.001 "hdgst": ${hdgst:-false}, 00:29:16.001 "ddgst": ${ddgst:-false} 00:29:16.001 }, 00:29:16.001 "method": "bdev_nvme_attach_controller" 00:29:16.001 } 00:29:16.001 EOF 00:29:16.001 )") 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.259 { 00:29:16.259 "params": { 00:29:16.259 "name": "Nvme$subsystem", 00:29:16.259 "trtype": "$TEST_TRANSPORT", 00:29:16.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.259 "adrfam": "ipv4", 00:29:16.259 "trsvcid": "$NVMF_PORT", 00:29:16.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.259 "hdgst": ${hdgst:-false}, 00:29:16.259 "ddgst": ${ddgst:-false} 00:29:16.259 }, 00:29:16.259 "method": "bdev_nvme_attach_controller" 00:29:16.259 } 00:29:16.259 EOF 00:29:16.259 )") 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.259 { 00:29:16.259 "params": { 00:29:16.259 "name": "Nvme$subsystem", 00:29:16.259 "trtype": "$TEST_TRANSPORT", 00:29:16.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.259 "adrfam": "ipv4", 00:29:16.259 "trsvcid": "$NVMF_PORT", 00:29:16.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.259 "hdgst": ${hdgst:-false}, 00:29:16.259 "ddgst": ${ddgst:-false} 00:29:16.259 }, 00:29:16.259 "method": "bdev_nvme_attach_controller" 00:29:16.259 } 00:29:16.259 EOF 00:29:16.259 )") 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.259 { 00:29:16.259 "params": { 00:29:16.259 "name": "Nvme$subsystem", 00:29:16.259 
"trtype": "$TEST_TRANSPORT", 00:29:16.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.259 "adrfam": "ipv4", 00:29:16.259 "trsvcid": "$NVMF_PORT", 00:29:16.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.259 "hdgst": ${hdgst:-false}, 00:29:16.259 "ddgst": ${ddgst:-false} 00:29:16.259 }, 00:29:16.259 "method": "bdev_nvme_attach_controller" 00:29:16.259 } 00:29:16.259 EOF 00:29:16.259 )") 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.259 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.260 { 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme$subsystem", 00:29:16.260 "trtype": "$TEST_TRANSPORT", 00:29:16.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "$NVMF_PORT", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.260 "hdgst": ${hdgst:-false}, 00:29:16.260 "ddgst": ${ddgst:-false} 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 } 00:29:16.260 EOF 00:29:16.260 )") 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.260 { 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme$subsystem", 00:29:16.260 "trtype": "$TEST_TRANSPORT", 00:29:16.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "$NVMF_PORT", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.260 "hdgst": ${hdgst:-false}, 00:29:16.260 "ddgst": ${ddgst:-false} 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 } 00:29:16.260 EOF 00:29:16.260 )") 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.260 { 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme$subsystem", 00:29:16.260 "trtype": "$TEST_TRANSPORT", 00:29:16.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "$NVMF_PORT", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.260 "hdgst": ${hdgst:-false}, 00:29:16.260 "ddgst": ${ddgst:-false} 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 } 00:29:16.260 EOF 00:29:16.260 )") 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.260 19:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.260 { 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme$subsystem", 00:29:16.260 "trtype": "$TEST_TRANSPORT", 00:29:16.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "$NVMF_PORT", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.260 "hdgst": ${hdgst:-false}, 00:29:16.260 "ddgst": ${ddgst:-false} 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 } 00:29:16.260 EOF 00:29:16.260 )") 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.260 { 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme$subsystem", 00:29:16.260 "trtype": "$TEST_TRANSPORT", 00:29:16.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "$NVMF_PORT", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.260 "hdgst": ${hdgst:-false}, 00:29:16.260 "ddgst": ${ddgst:-false} 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 } 00:29:16.260 EOF 00:29:16.260 )") 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:16.260 { 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme$subsystem", 00:29:16.260 "trtype": "$TEST_TRANSPORT", 00:29:16.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "$NVMF_PORT", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.260 "hdgst": ${hdgst:-false}, 00:29:16.260 "ddgst": ${ddgst:-false} 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 } 00:29:16.260 EOF 00:29:16.260 )") 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
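gen_nvmf_target_json, traced above, emits one JSON fragment per requested subsystem from a here-document and appends each to the config array; the jq step that follows validates the merged result before it is fed to bdevperf. A simplified, hedged sketch of the pattern (not the actual helper from nvmf/common.sh, which substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT where this sketch hard-codes tcp/10.0.0.2/4420):

#!/usr/bin/env bash
# Build one attach-controller fragment per subsystem id.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas; the [] wrapper only makes the joined text
# valid standalone JSON for jq in this sketch -- the real helper embeds the
# fragments in the larger bdev configuration that bdevperf consumes.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .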
00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:29:16.260 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme1", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme2", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme3", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme4", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme5", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme6", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme7", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme8", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme9", 00:29:16.260 "trtype": "tcp", 00:29:16.260 "traddr": "10.0.0.2", 00:29:16.260 "adrfam": "ipv4", 00:29:16.260 "trsvcid": "4420", 00:29:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:16.260 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:16.260 "hdgst": false, 00:29:16.260 "ddgst": false 00:29:16.260 }, 00:29:16.260 "method": "bdev_nvme_attach_controller" 00:29:16.260 },{ 00:29:16.260 "params": { 00:29:16.260 "name": "Nvme10", 00:29:16.260 "trtype": "tcp", 00:29:16.261 "traddr": "10.0.0.2", 00:29:16.261 "adrfam": "ipv4", 00:29:16.261 "trsvcid": "4420", 00:29:16.261 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:16.261 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:16.261 "hdgst": false, 00:29:16.261 "ddgst": false 00:29:16.261 }, 00:29:16.261 "method": "bdev_nvme_attach_controller" 00:29:16.261 }' 00:29:16.261 [2024-10-13 19:59:05.901744] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:16.261 [2024-10-13 19:59:05.901907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076456 ] 00:29:16.261 [2024-10-13 19:59:06.030743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.518 [2024-10-13 19:59:06.159068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.045 Running I/O for 10 seconds... 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:19.045 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:19.303 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:19.561 19:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3076456 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3076456 ']' 00:29:19.561 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3076456 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076456 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076456' 00:29:19.562 killing process with pid 3076456 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3076456 00:29:19.562 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3076456 00:29:19.820 Received shutdown signal, test time was about 0.981794 seconds 00:29:19.820 00:29:19.820 Latency(us) 00:29:19.820 [2024-10-13T17:59:09.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.820 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme1n1 : 0.95 201.52 12.59 0.00 0.00 312915.69 25437.68 299815.06 00:29:19.820 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme2n1 : 0.95 207.46 12.97 0.00 0.00 295718.91 6262.33 307582.29 00:29:19.820 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme3n1 : 0.92 208.48 13.03 0.00 0.00 290062.79 21456.97 306028.85 00:29:19.820 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme4n1 : 0.93 205.72 12.86 0.00 0.00 287303.55 20874.43 298261.62 00:29:19.820 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme5n1 : 0.97 198.08 12.38 0.00 0.00 292887.01 24563.86 284280.60 00:29:19.820 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme6n1 : 0.97 196.99 12.31 0.00 0.00 288391.02 22524.97 302921.96 00:29:19.820 Job: Nvme7n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme7n1 : 0.94 204.89 12.81 0.00 0.00 269266.49 29127.11 302921.96 00:29:19.820 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme8n1 : 0.96 203.95 12.75 0.00 0.00 264094.97 3665.16 281173.71 00:29:19.820 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme9n1 : 0.98 195.73 12.23 0.00 0.00 270903.37 24175.50 327777.09 00:29:19.820 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.820 Verification LBA range: start 0x0 length 0x400 00:29:19.820 Nvme10n1 : 0.91 140.93 8.81 0.00 0.00 360305.40 24855.13 344865.00 00:29:19.820 [2024-10-13T17:59:09.635Z] =================================================================================================================== 00:29:19.820 [2024-10-13T17:59:09.635Z] Total : 1963.74 122.73 0.00 0.00 290825.99 3665.16 344865.00 00:29:20.753 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3076144 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.686 rmmod nvme_tcp 00:29:21.686 rmmod nvme_fabrics 00:29:21.686 rmmod nvme_keyring 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3076144 ']' 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@516 -- # killprocess 3076144 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3076144 ']' 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3076144 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076144 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076144' 00:29:21.686 killing process with pid 3076144 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3076144 00:29:21.686 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3076144 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.967 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.868 00:29:26.868 real 0m12.782s 00:29:26.868 user 0m43.857s 00:29:26.868 sys 0m1.911s 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 
-- # set +x 00:29:26.868 ************************************ 00:29:26.868 END TEST nvmf_shutdown_tc2 00:29:26.868 ************************************ 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.868 ************************************ 00:29:26.868 START TEST nvmf_shutdown_tc3 00:29:26.868 ************************************ 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.868 19:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.868 19:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.868 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:29:26.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.868 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.869 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.869 19:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:29:26.869 00:29:26.869 --- 10.0.0.2 ping statistics --- 00:29:26.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.869 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:29:26.869 00:29:26.869 --- 10.0.0.1 ping statistics --- 00:29:26.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.869 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3077762 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3077762 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3077762 ']' 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
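waitforlisten, invoked above for the freshly started nvmf_tgt, blocks until the application answers on its RPC socket (the trace shows it defaulting to /var/tmp/spdk.sock with max_retries=100). A hedged sketch of that wait, simplified from what autotest_common.sh actually does:

# Sketch only: poll the RPC socket until it responds, give up if the target
# process dies first; rpc_get_methods is used here as a lightweight probe.
waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
                kill -0 "$pid" 2>/dev/null || return 1
                if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                        return 0
                fi
                sleep 0.1
        done
        return 1
}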
00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.869 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.869 [2024-10-13 19:59:16.527881] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:26.869 [2024-10-13 19:59:16.528039] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.869 [2024-10-13 19:59:16.672169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.127 [2024-10-13 19:59:16.817947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.127 [2024-10-13 19:59:16.818024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.127 [2024-10-13 19:59:16.818050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.127 [2024-10-13 19:59:16.818075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.127 [2024-10-13 19:59:16.818094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.127 [2024-10-13 19:59:16.820956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.127 [2024-10-13 19:59:16.821072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.127 [2024-10-13 19:59:16.821122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.127 [2024-10-13 19:59:16.821125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.693 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.693 [2024-10-13 19:59:17.490073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:27.951 19:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.951 19:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.951 Malloc1 
00:29:27.951 [2024-10-13 19:59:17.632979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.951 Malloc2 00:29:28.210 Malloc3 00:29:28.210 Malloc4 00:29:28.210 Malloc5 00:29:28.467 Malloc6 00:29:28.467 Malloc7 00:29:28.725 Malloc8 00:29:28.725 Malloc9 00:29:28.725 Malloc10 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3078075 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3078075 /var/tmp/bdevperf.sock 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3078075 ']' 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:28.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
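Once this second bdevperf instance is up, the suite gates the shutdown on the same waitforio loop already traced for tc2, where Nvme1n1's read count climbed from 3 to 67 to 131 against the 100-operation threshold. A hedged sketch of that loop, simplified from target/shutdown.sh but using the same RPC call and jq filter visible in the trace:

# Sketch only: poll bdevperf's iostat until the bdev has completed at least
# 100 reads, retrying up to 10 times with a 0.25 s pause between polls.
waitforio_sketch() {
        local rpc_sock=$1 bdev=$2 i count
        for ((i = 10; i != 0; i--)); do
                count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                        jq -r '.bdevs[0].num_read_ops')
                [ "$count" -ge 100 ] && return 0
                sleep 0.25
        done
        return 1
}
# e.g. waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1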
00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.984 { 00:29:28.984 "params": { 00:29:28.984 "name": "Nvme$subsystem", 00:29:28.984 "trtype": "$TEST_TRANSPORT", 00:29:28.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.984 "adrfam": "ipv4", 00:29:28.984 "trsvcid": "$NVMF_PORT", 00:29:28.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.984 "hdgst": ${hdgst:-false}, 00:29:28.984 "ddgst": ${ddgst:-false} 00:29:28.984 }, 00:29:28.984 "method": "bdev_nvme_attach_controller" 00:29:28.984 } 00:29:28.984 EOF 00:29:28.984 )") 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.984 { 00:29:28.984 "params": { 00:29:28.984 "name": "Nvme$subsystem", 00:29:28.984 "trtype": "$TEST_TRANSPORT", 00:29:28.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.984 "adrfam": "ipv4", 00:29:28.984 "trsvcid": "$NVMF_PORT", 00:29:28.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.984 "hdgst": ${hdgst:-false}, 00:29:28.984 "ddgst": ${ddgst:-false} 00:29:28.984 }, 00:29:28.984 "method": "bdev_nvme_attach_controller" 00:29:28.984 } 00:29:28.984 EOF 00:29:28.984 )") 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.984 { 00:29:28.984 "params": { 00:29:28.984 "name": "Nvme$subsystem", 00:29:28.984 "trtype": "$TEST_TRANSPORT", 00:29:28.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.984 "adrfam": "ipv4", 00:29:28.984 "trsvcid": "$NVMF_PORT", 00:29:28.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.984 "hdgst": ${hdgst:-false}, 00:29:28.984 "ddgst": ${ddgst:-false} 00:29:28.984 }, 00:29:28.984 "method": "bdev_nvme_attach_controller" 00:29:28.984 } 00:29:28.984 EOF 00:29:28.984 )") 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.984 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.984 { 00:29:28.984 "params": { 00:29:28.984 "name": "Nvme$subsystem", 00:29:28.984 
"trtype": "$TEST_TRANSPORT", 00:29:28.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.984 "adrfam": "ipv4", 00:29:28.984 "trsvcid": "$NVMF_PORT", 00:29:28.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.985 { 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme$subsystem", 00:29:28.985 "trtype": "$TEST_TRANSPORT", 00:29:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "$NVMF_PORT", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.985 { 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme$subsystem", 00:29:28.985 "trtype": "$TEST_TRANSPORT", 00:29:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "$NVMF_PORT", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.985 { 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme$subsystem", 00:29:28.985 "trtype": "$TEST_TRANSPORT", 00:29:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "$NVMF_PORT", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.985 19:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.985 { 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme$subsystem", 00:29:28.985 "trtype": "$TEST_TRANSPORT", 00:29:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "$NVMF_PORT", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.985 { 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme$subsystem", 00:29:28.985 "trtype": "$TEST_TRANSPORT", 00:29:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "$NVMF_PORT", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.985 { 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme$subsystem", 00:29:28.985 "trtype": "$TEST_TRANSPORT", 00:29:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "$NVMF_PORT", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.985 "hdgst": ${hdgst:-false}, 00:29:28.985 "ddgst": ${ddgst:-false} 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 } 00:29:28.985 EOF 00:29:28.985 )") 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:29:28.985 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme1", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme2", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme3", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme4", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme5", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme6", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme7", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme8", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.985 }, 00:29:28.985 "method": "bdev_nvme_attach_controller" 00:29:28.985 },{ 00:29:28.985 "params": { 00:29:28.985 "name": "Nvme9", 00:29:28.985 "trtype": "tcp", 00:29:28.985 "traddr": "10.0.0.2", 00:29:28.985 "adrfam": "ipv4", 00:29:28.985 "trsvcid": "4420", 00:29:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:28.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:28.985 "hdgst": false, 00:29:28.985 "ddgst": false 00:29:28.986 }, 00:29:28.986 "method": "bdev_nvme_attach_controller" 00:29:28.986 },{ 00:29:28.986 "params": { 00:29:28.986 "name": "Nvme10", 00:29:28.986 "trtype": "tcp", 00:29:28.986 "traddr": "10.0.0.2", 00:29:28.986 "adrfam": "ipv4", 00:29:28.986 "trsvcid": "4420", 00:29:28.986 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:28.986 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:28.986 "hdgst": false, 00:29:28.986 "ddgst": false 00:29:28.986 }, 00:29:28.986 "method": "bdev_nvme_attach_controller" 00:29:28.986 }' 00:29:28.986 [2024-10-13 19:59:18.660249] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:28.986 [2024-10-13 19:59:18.660404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078075 ] 00:29:28.986 [2024-10-13 19:59:18.793697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.243 [2024-10-13 19:59:18.920385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.771 Running I/O for 10 seconds... 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:31.771 19:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=15 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 15 -ge 100 ']' 00:29:31.771 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:32.029 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=131 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3077762 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3077762 ']' 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3077762 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:32.287 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077762 00:29:32.561 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:32.562 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:32.562 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077762' 00:29:32.562 killing process with pid 3077762 00:29:32.562 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3077762 00:29:32.562 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3077762 00:29:32.562 [2024-10-13 19:59:22.133629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.133989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.134914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.562 [2024-10-13 19:59:22.143461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.143981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.144274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.147984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.563 [2024-10-13 19:59:22.148597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.148659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.148704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.148723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.148742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.148783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.148801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.148819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.148837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.148996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.149014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.149030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.149073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.149048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.149107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.149131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.149151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.149173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.149194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.149216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.564 [2024-10-13 19:59:22.149237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.564 [2024-10-13 19:59:22.149257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.152914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.152970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.152995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564
[2024-10-13 19:59:22.153183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 
[2024-10-13 19:59:22.153593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.564 [2024-10-13 19:59:22.153817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.153963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 
[2024-10-13 19:59:22.153983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.154147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.156984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 
[2024-10-13 19:59:22.157036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 
[2024-10-13 19:59:22.157445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 
[2024-10-13 19:59:22.157878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.157986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.158004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.158022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.158039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.159657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.159704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.159725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.565 [2024-10-13 19:59:22.159744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 
[2024-10-13 19:59:22.159913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.159984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 
[2024-10-13 19:59:22.160291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 
[2024-10-13 19:59:22.160700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.160873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.566 [2024-10-13 19:59:22.163996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.566 [2024-10-13 19:59:22.164052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.566 [2024-10-13 19:59:22.164117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.566 [2024-10-13 19:59:22.164143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.566 [2024-10-13 19:59:22.164170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.566 [2024-10-13 19:59:22.164193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.566 [2024-10-13 19:59:22.164218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.566 [2024-10-13 19:59:22.164240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.566 [2024-10-13 19:59:22.164284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.566 [2024-10-13 19:59:22.164308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.566 [2024-10-13 19:59:22.164333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.566 [2024-10-13 19:59:22.164355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.566 [2024-10-13 19:59:22.164379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.164959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.164983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.165967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.165989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.567 [2024-10-13 19:59:22.166218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.567 [2024-10-13 19:59:22.166243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.567 [2024-10-13 19:59:22.166265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.567 [2024-10-13 19:59:22.166264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.567 [2024-10-13 19:59:22.166285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.567 [2024-10-13 19:59:22.166292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.567 [2024-10-13 19:59:22.166303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 
[2024-10-13 19:59:22.166487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 
[2024-10-13 19:59:22.166734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 
[2024-10-13 19:59:22.166938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.166955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.166980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.166992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.167013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.167032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.167070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.167089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.167107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.167125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 
[2024-10-13 19:59:22.167163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.167181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.167199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.568 [2024-10-13 19:59:22.167238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.568 [2024-10-13 19:59:22.167249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.568 [2024-10-13 19:59:22.167256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.569 [2024-10-13 19:59:22.167343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.167666] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001fac00 was disconnected and freed. reset controller. 
00:29:32.569 [2024-10-13 19:59:22.168065] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.569 [2024-10-13 19:59:22.168205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:32.569 [2024-10-13 19:59:22.168302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.168595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.168856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.168959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.168979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.569 [2024-10-13 19:59:22.169765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.569 [2024-10-13 19:59:22.169784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:32.569 [2024-10-13 19:59:22.169915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.169997] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.569 [2024-10-13 
19:59:22.170008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.569 [2024-10-13 19:59:22.170245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 
19:59:22.170410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 
19:59:22.170811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.170988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.171879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:32.570 [2024-10-13 19:59:22.171948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:32.570 [2024-10-13 19:59:22.172006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172071] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172535] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.172563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.570 [2024-10-13 19:59:22.172571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.570 [2024-10-13 19:59:22.172585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.570 [2024-10-13 19:59:22.172593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.172950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.172985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.172990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.571 [2024-10-13 19:59:22.173503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.571 [2024-10-13 19:59:22.173516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.571 [2024-10-13 19:59:22.173521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.173779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.173959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.173981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.174962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.174986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.175006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.175030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.175050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.175073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.175093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.175116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.175141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.175165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.175186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.572 [2024-10-13 19:59:22.175207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:32.572 [2024-10-13 19:59:22.175553] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9f80 was disconnected and freed. reset controller. 00:29:32.572 [2024-10-13 19:59:22.175643] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.572 [2024-10-13 19:59:22.177504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.572 [2024-10-13 19:59:22.177539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.177967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.177987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.178959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.178982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.573 [2024-10-13 19:59:22.179586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-10-13 19:59:22.179608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.179963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.179987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.180040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.180094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.180141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.180187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.180233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.180279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.180304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-10-13 19:59:22.199834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.199859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:32.574 [2024-10-13 19:59:22.200252] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001fa200 was disconnected and freed. reset controller. 
00:29:32.574 [2024-10-13 19:59:22.201557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.574 [2024-10-13 19:59:22.201829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.574 [2024-10-13 19:59:22.201869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:32.574 [2024-10-13 19:59:22.201896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:32.574 [2024-10-13 19:59:22.201973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:32.574 [2024-10-13 19:59:22.202228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:32.574 [2024-10-13 19:59:22.202272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:32.574 [2024-10-13 19:59:22.202379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 
[2024-10-13 19:59:22.202534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.574 [2024-10-13 19:59:22.202555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.574 [2024-10-13 19:59:22.202581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:32.574 [2024-10-13 19:59:22.202627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:32.574 [2024-10-13 19:59:22.202678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:32.574 [2024-10-13 19:59:22.202741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:32.574 [2024-10-13 19:59:22.202828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:32.574 [2024-10-13 19:59:22.209142] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.574 [2024-10-13 19:59:22.210929] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.574 [2024-10-13 19:59:22.212698] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.574 [2024-10-13 19:59:22.221539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:32.574 [2024-10-13 19:59:22.221746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.574 [2024-10-13 19:59:22.221784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:32.575 [2024-10-13 19:59:22.221810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:32.575 [2024-10-13 19:59:22.221861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:32.575 [2024-10-13 19:59:22.221899] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.575 [2024-10-13 19:59:22.221946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:32.575 [2024-10-13 19:59:22.222043] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:32.575 [2024-10-13 19:59:22.222080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:32.575 [2024-10-13 19:59:22.222814] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.575 [2024-10-13 19:59:22.222934] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.575 [2024-10-13 19:59:22.223116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.223973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.223998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:32.575 [2024-10-13 19:59:22.224143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 
19:59:22.224633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.224972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.224993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.575 [2024-10-13 19:59:22.225017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.575 [2024-10-13 19:59:22.225038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.225978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.225999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.226044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.226089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.226180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.226225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.576 [2024-10-13 19:59:22.226270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.576 [2024-10-13 19:59:22.226291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:32.576 task offset: 25600 on job bdev=Nvme6n1 fails 00:29:32.576 1384.86 IOPS, 86.55 MiB/s [2024-10-13T17:59:22.391Z] [2024-10-13 19:59:22.227998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.576 [2024-10-13 19:59:22.228035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:32.576 [2024-10-13 19:59:22.228059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:32.576 [2024-10-13 19:59:22.228085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:32.576 [2024-10-13 19:59:22.228106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:32.576 [2024-10-13 19:59:22.228131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:29:32.576 [2024-10-13 19:59:22.228868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.576 [2024-10-13 19:59:22.228900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair (READ or WRITE, then "ABORTED - SQ DELETION (00/08)") repeats for READ cid:5-54 (lba:17024-23296), WRITE cid:0-3 (lba:24576-24960) and READ cid:55-63 (lba:23424-24448) on this qpair ...]
00:29:32.578 [2024-10-13 19:59:22.231993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set
00:29:32.578 [2024-10-13 19:59:22.233534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.578 [2024-10-13 19:59:22.233566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for READ cid:5-56 (lba:17024-23552), WRITE cid:0-3 (lba:24576-24960) and READ cid:57-63 (lba:23680-24448) on this qpair ...]
00:29:32.579 [2024-10-13 19:59:22.236638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set
00:29:32.579 [2024-10-13 19:59:22.238157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.579 [2024-10-13 19:59:22.238187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for READ cid:1-63 (lba:16512-24448) on this qpair ...]
00:29:32.581 [2024-10-13 19:59:22.241282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set
00:29:32.581 [2024-10-13 19:59:22.242867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.581 [2024-10-13 19:59:22.242902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for READ cid:1-16 (lba:16512-18432) and continues ...]
00:29:32.581 [2024-10-13 19:59:22.243739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.243760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.243783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.243804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.243828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.243854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.243879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.243902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.243927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.243949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.243973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.243994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.244019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.581 [2024-10-13 19:59:22.244041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.581 [2024-10-13 19:59:22.244066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.244789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.244813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.256956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.256980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.582 [2024-10-13 19:59:22.257332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.582 [2024-10-13 19:59:22.257357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set 00:29:32.582 [2024-10-13 19:59:22.259182] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.582 [2024-10-13 19:59:22.259287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:32.582 [2024-10-13 19:59:22.259347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.582 [2024-10-13 19:59:22.259379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:32.582 [2024-10-13 19:59:22.259416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:32.583 [2024-10-13 19:59:22.259444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:32.583 [2024-10-13 19:59:22.259597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:32.583 [2024-10-13 19:59:22.259635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.583 [2024-10-13 19:59:22.259658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.583 [2024-10-13 19:59:22.259696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.583 [2024-10-13 19:59:22.259794] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.583 [2024-10-13 19:59:22.259850] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.583 [2024-10-13 19:59:22.259881] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.583 [2024-10-13 19:59:22.260749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:32.583 [2024-10-13 19:59:22.260786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.583 [2024-10-13 19:59:22.261018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.583 [2024-10-13 19:59:22.261070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:32.583 [2024-10-13 19:59:22.261096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:32.583 [2024-10-13 19:59:22.261235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.583 [2024-10-13 19:59:22.261268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:32.583 [2024-10-13 19:59:22.261292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:32.583 [2024-10-13 19:59:22.261423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.583 [2024-10-13 19:59:22.261456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:32.583 [2024-10-13 19:59:22.261478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:32.583 [2024-10-13 19:59:22.261584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.583 [2024-10-13 19:59:22.261617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:32.583 [2024-10-13 19:59:22.261639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:32.583 [2024-10-13 19:59:22.261660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:32.583 [2024-10-13 19:59:22.261679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:32.583 [2024-10-13 19:59:22.261698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:32.583 [2024-10-13 19:59:22.263751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.263787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.263823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.263854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.263878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.263900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.263925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.263946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.263970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.263992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264180] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.264957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.264981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.583 [2024-10-13 19:59:22.265001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.583 [2024-10-13 19:59:22.265025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.265958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.265981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.266829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.266850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:32.584 [2024-10-13 19:59:22.268359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.268411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.268455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.268481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.268508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.268530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.584 [2024-10-13 19:59:22.268555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.584 [2024-10-13 19:59:22.268582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268608] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.268960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.268984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.269965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.269989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.585 [2024-10-13 19:59:22.270535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.585 [2024-10-13 19:59:22.270559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.585 [2024-10-13 19:59:22.270580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.270961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.270986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 
19:59:22.271007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.586 [2024-10-13 19:59:22.271458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.586 [2024-10-13 19:59:22.271480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set 00:29:32.586 [2024-10-13 19:59:22.276221] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:32.586 [2024-10-13 19:59:22.276263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.586 [2024-10-13 19:59:22.276287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:32.586
00:29:32.586 Latency(us)
[2024-10-13T17:59:22.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme1n1 ended in about 0.99 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme1n1 : 0.99 146.74 9.17 64.33 0.00 299878.40 17476.27 299815.06
00:29:32.586 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme2n1 ended in about 1.02 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme2n1 : 1.02 136.61 8.54 62.45 0.00 311956.12 35535.08 323116.75
00:29:32.586 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme3n1 ended in about 1.05 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme3n1 : 1.05 125.65 7.85 60.92 0.00 326629.59 23204.60 302921.96
00:29:32.586 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme4n1 ended in about 1.06 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme4n1 : 1.06 125.09 7.82 60.65 0.00 321542.01 30680.56 313796.08
00:29:32.586 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme5n1 ended in about 1.06 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme5n1 : 1.06 120.77 7.55 60.39 0.00 323274.02 26020.22 301368.51
00:29:32.586 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme6n1 ended in about 0.99 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme6n1 : 0.99 194.12 12.13 64.71 0.00 219744.62 6796.33 301368.51
00:29:32.586 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme7n1 ended in about 1.08 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme7n1 : 1.08 118.97 7.44 59.48 0.00 315416.27 43302.31 279620.27
00:29:32.586 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme8n1 ended in about 1.09 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme8n1 : 1.09 117.94 7.37 58.97 0.00 311950.60 21748.24 302921.96
00:29:32.586 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme9n1 ended in about 1.09 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme9n1 : 1.09 117.44 7.34 58.72 0.00 306956.01 23981.32 310689.19
00:29:32.586 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.586 Job: Nvme10n1 ended in about 1.04 seconds with error
00:29:32.586 Verification LBA range: start 0x0 length 0x400
00:29:32.586 Nvme10n1 : 1.04 122.50 7.66 61.25 0.00 285396.64 22719.15 327777.09
[2024-10-13T17:59:22.401Z] ===================================================================================================================
00:29:32.586 [2024-10-13T17:59:22.401Z] Total : 1325.82 82.86 611.86 0.00 299784.63 6796.33 327777.09
00:29:32.845 [2024-10-13 19:59:22.364770] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:32.845 [2024-10-13 19:59:22.364887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:29:32.845 [2024-10-13 19:59:22.365277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.845 [2024-10-13 19:59:22.365327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:29:32.845 [2024-10-13 19:59:22.365359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set
00:29:32.845 [2024-10-13 19:59:22.365422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor
00:29:32.845 [2024-10-13 19:59:22.365463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor
00:29:32.845 [2024-10-13 19:59:22.365500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:29:32.845 [2024-10-13 19:59:22.365531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor
00:29:32.845 [2024-10-13 19:59:22.365634] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.845 [2024-10-13 19:59:22.365670] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.845 [2024-10-13 19:59:22.365698] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.845 [2024-10-13 19:59:22.365725] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
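[editor note] The table above is the standard bdevperf summary: one row per attached NVMe bdev (Nvme1n1..Nvme10n1) with runtime, IOPS, MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds; all jobs end "with error" because the target is deliberately shut down mid-run. A minimal sketch of driving a similar verify workload is shown below; the bdevperf binary path, the config file name (the tc3 teardown later removes a file called bdevperf.conf) and the -t run time are assumptions, not values read from this log.

  # Queue depth 64 and 64 KiB I/Os match the "depth: 64, IO size: 65536" reported per job above.
  # Binary path, --json config name and run time are illustrative assumptions.
  ./build/examples/bdevperf --json ./bdevperf.conf -q 64 -o 65536 -w verify -t 10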
00:29:32.845 [2024-10-13 19:59:22.365754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:32.845 [2024-10-13 19:59:22.366650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.845 [2024-10-13 19:59:22.366690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:32.845 [2024-10-13 19:59:22.366714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:32.845 [2024-10-13 19:59:22.366834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.845 [2024-10-13 19:59:22.366867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:32.845 [2024-10-13 19:59:22.366889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:32.845 [2024-10-13 19:59:22.367024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.845 [2024-10-13 19:59:22.367058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:32.845 [2024-10-13 19:59:22.367081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:32.845 [2024-10-13 19:59:22.367107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:32.845 [2024-10-13 19:59:22.367129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:32.845 [2024-10-13 19:59:22.367155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:32.845 [2024-10-13 19:59:22.367187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:32.845 [2024-10-13 19:59:22.367208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:32.845 [2024-10-13 19:59:22.367228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:32.845 [2024-10-13 19:59:22.367255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:32.845 [2024-10-13 19:59:22.367276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:32.845 [2024-10-13 19:59:22.367309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:32.845 [2024-10-13 19:59:22.367338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:32.845 [2024-10-13 19:59:22.367357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:32.845 [2024-10-13 19:59:22.367375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:32.845 [2024-10-13 19:59:22.367447] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:32.846 [2024-10-13 19:59:22.367488] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.846 [2024-10-13 19:59:22.367517] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.846 [2024-10-13 19:59:22.367544] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.846 [2024-10-13 19:59:22.367572] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.846 [2024-10-13 19:59:22.367599] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.846 [2024-10-13 19:59:22.367627] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.846 [2024-10-13 19:59:22.368805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.846 [2024-10-13 19:59:22.368847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:32.846 [2024-10-13 19:59:22.368918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.368944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.368962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.368981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.369035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:32.846 [2024-10-13 19:59:22.369068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:32.846 [2024-10-13 19:59:22.369096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:32.846 [2024-10-13 19:59:22.369119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:32.846 [2024-10-13 19:59:22.369138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:32.846 [2024-10-13 19:59:22.369157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:32.846 [2024-10-13 19:59:22.369467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.846 [2024-10-13 19:59:22.369626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.846 [2024-10-13 19:59:22.369661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:32.846 [2024-10-13 19:59:22.369686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:32.846 [2024-10-13 19:59:22.369812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.846 [2024-10-13 19:59:22.369846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:32.846 [2024-10-13 19:59:22.369869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:32.846 [2024-10-13 19:59:22.369891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:32.846 [2024-10-13 19:59:22.369910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:32.846 [2024-10-13 19:59:22.369929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:32.846 [2024-10-13 19:59:22.369956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:32.846 [2024-10-13 19:59:22.369979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:32.846 [2024-10-13 19:59:22.370005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:32.846 [2024-10-13 19:59:22.370033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:32.846 [2024-10-13 19:59:22.370054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:32.846 [2024-10-13 19:59:22.370088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:32.846 [2024-10-13 19:59:22.370173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.370201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.370219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.370243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:32.846 [2024-10-13 19:59:22.370273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:32.846 [2024-10-13 19:59:22.370335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.846 [2024-10-13 19:59:22.370360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.846 [2024-10-13 19:59:22.370381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:32.846 [2024-10-13 19:59:22.370436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:32.846 [2024-10-13 19:59:22.370459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:32.846 [2024-10-13 19:59:22.370479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:32.846 [2024-10-13 19:59:22.370550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.846 [2024-10-13 19:59:22.370576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.375 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3078075 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3078075 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3078075 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:36.313 19:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.313 rmmod nvme_tcp 00:29:36.313 rmmod nvme_fabrics 00:29:36.313 rmmod nvme_keyring 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3077762 ']' 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3077762 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3077762 ']' 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3077762 00:29:36.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3077762) - No such process 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3077762 is not found' 00:29:36.313 Process with pid 3077762 is not found 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:29:36.313 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.313 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.313 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.313 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.313 19:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.218 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.477 00:29:38.477 real 0m11.764s 00:29:38.477 user 0m35.587s 00:29:38.477 sys 0m1.943s 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.477 ************************************ 00:29:38.477 END TEST nvmf_shutdown_tc3 00:29:38.477 ************************************ 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.477 ************************************ 00:29:38.477 START TEST nvmf_shutdown_tc4 00:29:38.477 ************************************ 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@10 -- # set +x 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:38.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:38.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.477 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:38.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:38.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:29:38.478 00:29:38.478 --- 10.0.0.2 ping statistics --- 00:29:38.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.478 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:38.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:29:38.478 00:29:38.478 --- 10.0.0.1 ping statistics --- 00:29:38.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.478 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3079320 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3079320 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3079320 ']' 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
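[editor note] The network preparation that the tc4 trace above performs one command at a time amounts to the following: the target port (cvl_0_0) is moved into its own network namespace, addresses are assigned on both sides, the NVMe/TCP port is opened in the firewall, and reachability is verified with a single ping in each direction. All names and addresses below are the ones used by this run; only the consolidation into one script is editorial.

  # Target NIC port lives in its own namespace; the initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, then check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1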
00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.478 19:59:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:38.737 [2024-10-13 19:59:28.357344] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:38.737 [2024-10-13 19:59:28.357505] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.737 [2024-10-13 19:59:28.495852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.995 [2024-10-13 19:59:28.634504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.995 [2024-10-13 19:59:28.634586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.995 [2024-10-13 19:59:28.634612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.995 [2024-10-13 19:59:28.634637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.995 [2024-10-13 19:59:28.634657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.995 [2024-10-13 19:59:28.637561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.995 [2024-10-13 19:59:28.637675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.995 [2024-10-13 19:59:28.637783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.995 [2024-10-13 19:59:28.637790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:39.561 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.561 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:39.561 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:39.561 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.561 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.819 [2024-10-13 19:59:29.396600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:39.819 19:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.819 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.819 Malloc1 
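After the transport is created (rpc_cmd nvmf_create_transport -t tcp -o -u 8192), the shutdown.sh@28 loop traced above appends one RPC batch per subsystem to rpcs.txt and shutdown.sh@36 then plays the whole batch back through rpc_cmd, which is where the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener reported here come from. A hedged sketch of what each loop iteration likely emits (NQNs, serial numbers and malloc sizes are illustrative, not taken from this log):

    # Approximate contents generated by the shutdown.sh@28/@29 loop, one block per subsystem.
    for i in {1..10}; do
    cat >> rpcs.txt <<EOF
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    # shutdown.sh@36 then executes the accumulated batch through a single rpc_cmd call.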
00:29:39.819 [2024-10-13 19:59:29.539722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.819 Malloc2 00:29:40.078 Malloc3 00:29:40.078 Malloc4 00:29:40.336 Malloc5 00:29:40.336 Malloc6 00:29:40.336 Malloc7 00:29:40.593 Malloc8 00:29:40.593 Malloc9 00:29:40.851 Malloc10 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3079584 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:40.851 19:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:40.851 [2024-10-13 19:59:30.587197] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3079320 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3079320 ']' 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3079320 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3079320 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3079320' 00:29:46.123 killing process with pid 3079320 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3079320 00:29:46.123 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3079320 00:29:46.123 [2024-10-13 19:59:35.525525] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.525614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.525639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.525658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.525747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.525770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.526177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.527886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.527931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.527955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.527975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.527994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.528011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.528029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538366] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.538558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 starting I/O failed: -6 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 [2024-10-13 19:59:35.539125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 starting I/O failed: -6 00:29:46.123 [2024-10-13 19:59:35.539169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.539194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 [2024-10-13 19:59:35.539213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 [2024-10-13 19:59:35.539232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.539250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 [2024-10-13 19:59:35.539268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 [2024-10-13 19:59:35.539286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 starting I/O failed: -6 00:29:46.123 [2024-10-13 19:59:35.539304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 [2024-10-13 19:59:35.539322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:46.123 
[2024-10-13 19:59:35.539341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same Write completed with error (sct=0, sc=8) 00:29:46.123 with the state(6) to be set 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 starting I/O failed: -6 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 starting I/O failed: -6 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.123 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 [2024-10-13 19:59:35.540759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.124 starting I/O failed: -6 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 
00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 [2024-10-13 19:59:35.542886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O 
failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 [2024-10-13 19:59:35.545685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error 
(sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.124 starting I/O failed: -6 00:29:46.124 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error 
(sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 [2024-10-13 19:59:35.555507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.125 NVMe io qpair process completion error 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 [2024-10-13 19:59:35.557416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, 
sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 [2024-10-13 19:59:35.559500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 
00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.125 starting I/O failed: -6 00:29:46.125 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 [2024-10-13 19:59:35.562335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 
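The flood of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries is the expected outcome of this test case rather than a malfunction: the target was killed (killprocess 3079320) about five seconds into a 20-second spdk_nvme_perf run, so each outstanding queued write on the initiator completes with sct=0/sc=8 (a generic "command aborted due to SQ deletion" status in NVMe terms) and the qpairs report CQ transport error -6, i.e. ENXIO ("No such device or address"). A condensed, hedged reconstruction of the sequence nvmf_shutdown_tc4 appears to drive, with paths shortened and error handling omitted (waitforlisten and killprocess are the autotest_common.sh helpers seen in the trace):

    # Sketch of the shutdown-under-load flow reconstructed from this trace; not verbatim test code.
    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                 # wait for /var/tmp/spdk.sock to come up
    # ... transport, 10 subsystems and the 10.0.0.2:4420 listeners are created via rpc_cmd ...
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5
    killprocess "$nvmfpid"                   # kill the target while ~128 writes per qpair are in flight
    # The initiator is expected to fail the outstanding I/O (the sct=0/sc=8 completions
    # above) and shut its qpairs down cleanly instead of hanging.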
00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 
00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 [2024-10-13 19:59:35.575293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.126 NVMe io qpair process completion error 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: 
-6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 [2024-10-13 19:59:35.577481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 starting I/O failed: -6 00:29:46.127 Write completed with error (sct=0, sc=8) 00:29:46.127 [2024-10-13 19:59:35.579418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport 
error -6 (No such device or address) on qpair id 1
[... repeated I/O status entries, "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", interleaved between each of the qpair error reports below ...]
00:29:46.127 [2024-10-13 19:59:35.582432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.128 [2024-10-13 19:59:35.595796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.128 NVMe io qpair process completion error
00:29:46.128 [2024-10-13 19:59:35.597987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.128 [2024-10-13 19:59:35.599981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.128 [2024-10-13 19:59:35.602750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.129 [2024-10-13 19:59:35.612270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.129 NVMe io qpair process completion error
00:29:46.129 [2024-10-13 19:59:35.614058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.129 [2024-10-13 19:59:35.616317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.130 [2024-10-13 19:59:35.619054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.130 [2024-10-13 19:59:35.631729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.130 NVMe io qpair process completion error
00:29:46.132 [2024-10-13 19:59:35.651088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.132 NVMe io qpair process completion error
00:29:46.132 [2024-10-13 19:59:35.653308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.132 [2024-10-13 19:59:35.655563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.133 [2024-10-13 19:59:35.658217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.133 [2024-10-13 19:59:35.673848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.133 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" status entries continue ...]
starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 [2024-10-13 19:59:35.677510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write 
completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 [2024-10-13 19:59:35.680178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write 
completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write 
completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.134 starting I/O failed: -6 00:29:46.134 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 [2024-10-13 19:59:35.689724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.135 NVMe io qpair process completion error 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with 
error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 [2024-10-13 19:59:35.691678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed 
with error (sct=0, sc=8) 00:29:46.135 [2024-10-13 19:59:35.693863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, 
sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 [2024-10-13 19:59:35.696606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.135 Write completed with error (sct=0, sc=8) 00:29:46.135 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, 
sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 [2024-10-13 19:59:35.705991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.136 NVMe io qpair process completion error 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with 
error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 [2024-10-13 19:59:35.707949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, 
sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 Write completed with error (sct=0, sc=8) 00:29:46.136 starting I/O failed: -6 00:29:46.136 [2024-10-13 19:59:35.709890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.136 starting I/O failed: -6 00:29:46.137 starting I/O failed: -6 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 
00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 [2024-10-13 19:59:35.712788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.137 starting I/O failed: -6 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with 
error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error (sct=0, sc=8) 00:29:46.137 starting I/O failed: -6 00:29:46.137 Write completed with error 
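The error code -6 is -ENXIO, "No such device or address": the TCP connection behind each I/O qpair disappears while the shutdown test tears the target down, so every command still queued on that qpair is completed with an error by the initiator (sct=0, sc=8 is the generic "command aborted due to SQ deletion" status). When digging through a flood like this offline, a couple of one-liners are usually enough to see how many writes were failed and which qpair ids were affected; the sketch below assumes the console output has been captured into a file named perf.log, which is a hypothetical name, not something this job produces by itself.

  # Offline triage of the flood above, assuming it was saved to perf.log (hypothetical):
  grep -c 'Write completed with error (sct=0, sc=8)' perf.log     # total failed writes
  grep -o 'on qpair id [0-9]*' perf.log | sort | uniq -c          # failures grouped by qpair id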
00:29:46.137 Initializing NVMe Controllers
00:29:46.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:46.137 Controller IO queue size 128, less than required.
00:29:46.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:46.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:46.138 Controller IO queue size 128, less than required.
00:29:46.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
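The "Controller IO queue size 128, less than required" warning means the queue depth requested by the workload does not fit the 128-entry I/O queues these TCP subsystems expose, so surplus requests wait in the driver's software queue, which is exactly what the "lower queue depth or smaller IO size" hint is about. One way to follow that hint by hand is sketched below with illustrative values only; shutdown.sh drives spdk_nvme_perf with its own arguments, and the -q, -o, -w, -t and -r flags shown are perf's standard queue-depth, I/O-size, workload, run-time and transport-ID options.

  # Illustrative rerun with a queue depth that fits inside the 128-entry queues.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randwrite -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'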
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:46.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:46.138 Initialization complete. Launching workers.
00:29:46.138 ========================================================
00:29:46.138                                                              Latency(us)
00:29:46.138 Device Information                                         :     IOPS    MiB/s    Average        min        max
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1359.26    58.41   94192.70    2175.82  227408.77
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1395.95    59.98   91846.91    1473.05  273268.83
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1397.42    60.05   91889.71    2015.70  252137.09
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1409.23    60.55   87739.72    1642.66  175700.08
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1404.80    60.36   88161.49     857.48  169317.05
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1401.01    60.20   88601.48    1675.92  152940.28
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1355.67    58.25   91779.25    1859.49  162921.84
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1357.78    58.34   91779.08    1507.62  174183.38
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1364.53    58.63   91509.12    1686.72  194635.58
00:29:46.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1356.09    58.27   92294.65    2357.11  206023.96
00:29:46.138 ========================================================
00:29:46.138 Total                                                     :  13801.74   593.04   90955.62     857.48  273268.83
00:29:46.138 ========================================================
00:29:46.138
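The averages in the table are consistent with Little's law (in-flight I/Os equal IOPS times average latency): for every device that product works out to roughly 124 to 128, for example 1359.26 IOPS x 0.0942 s is about 128 for cnode2, so each controller's 128-entry I/O queue was kept essentially full for the whole run. That is also why average latency sits near 90 ms at only about 1.4k IOPS per device.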
00:29:46.138 [2024-10-13 19:59:35.756906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set
00:29:46.138 [2024-10-13 19:59:35.757743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:29:46.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:48.728 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3079584
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3079584
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:49.686 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3079584
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
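The NOT wait 3079584 sequence above is the test asserting that the perf process (PID 3079584) finished with a failure, which is expected here because the target was shut down underneath it: wait returns perf's non-zero exit status (es=1 in the trace), and the NOT helper from autotest_common.sh inverts that status so an expected failure counts as a pass. A stripped-down sketch of the pattern follows; the real helper also validates its argument via valid_exec_arg and does the signal and exit-status bookkeeping visible in the trace, which the sketch omits.

  # Minimal "expect this command to fail" wrapper; the real NOT() in
  # autotest_common.sh also validates the argument and inspects the exit code.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))     # success only when the wrapped command failed
  }
  NOT wait 3079584      # passes because the perf run exited non-zero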
nvmfcleanup 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.687 rmmod nvme_tcp 00:29:49.687 rmmod nvme_fabrics 00:29:49.687 rmmod nvme_keyring 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3079320 ']' 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3079320 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3079320 ']' 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3079320 00:29:49.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3079320) - No such process 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3079320 is not found' 00:29:49.687 Process with pid 3079320 is not found 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.687 19:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
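The trace above is nvmftestfini tearing down the NVMe/TCP test bed once nvmf_shutdown_tc4 finishes: the nvme-tcp and nvme-fabrics modules are unloaded (taking nvme_keyring with them), the target process 3079320 is killed (it has already exited, hence the "No such process" note), the SPDK_NVMF-tagged iptables rules are stripped, and the target network namespace is removed. A minimal manual sketch of the same cleanup, assuming the namespace and interface names used elsewhere in this log (cvl_0_0_ns_spdk, cvl_0_1) and a recorded target pid in $nvmfpid, would look roughly like:

    # unload the initiator-side NVMe/TCP kernel modules
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the nvmf target if it is still running (what the log's killprocess helper does)
    kill "$nvmfpid" 2>/dev/null || true

    # drop only the SPDK_NVMF-tagged iptables rules that were added during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # tear down the target namespace and flush the initiator-side address
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1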
00:29:51.591 19:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.591 00:29:51.591 real 0m13.316s 00:29:51.591 user 0m37.821s 00:29:51.591 sys 0m4.959s 00:29:51.591 19:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:51.591 19:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.591 ************************************ 00:29:51.591 END TEST nvmf_shutdown_tc4 00:29:51.591 ************************************ 00:29:51.849 19:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:51.849 00:29:51.849 real 0m55.871s 00:29:51.849 user 2m54.412s 00:29:51.849 sys 0m13.102s 00:29:51.849 19:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:51.849 19:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:51.849 ************************************ 00:29:51.849 END TEST nvmf_shutdown 00:29:51.849 ************************************ 00:29:51.849 19:59:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:51.849 00:29:51.849 real 18m19.800s 00:29:51.849 user 50m50.116s 00:29:51.849 sys 3m28.707s 00:29:51.849 19:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:51.849 19:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:51.849 ************************************ 00:29:51.849 END TEST nvmf_target_extra 00:29:51.849 ************************************ 00:29:51.849 19:59:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:51.849 19:59:41 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:51.849 19:59:41 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:51.849 19:59:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.849 ************************************ 00:29:51.849 START TEST nvmf_host 00:29:51.849 ************************************ 00:29:51.849 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:51.849 * Looking for test storage... 
00:29:51.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:51.849 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:51.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.850 --rc genhtml_branch_coverage=1 00:29:51.850 --rc genhtml_function_coverage=1 00:29:51.850 --rc genhtml_legend=1 00:29:51.850 --rc geninfo_all_blocks=1 00:29:51.850 --rc geninfo_unexecuted_blocks=1 00:29:51.850 00:29:51.850 ' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:51.850 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.850 --rc genhtml_branch_coverage=1 00:29:51.850 --rc genhtml_function_coverage=1 00:29:51.850 --rc genhtml_legend=1 00:29:51.850 --rc geninfo_all_blocks=1 00:29:51.850 --rc geninfo_unexecuted_blocks=1 00:29:51.850 00:29:51.850 ' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:51.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.850 --rc genhtml_branch_coverage=1 00:29:51.850 --rc genhtml_function_coverage=1 00:29:51.850 --rc genhtml_legend=1 00:29:51.850 --rc geninfo_all_blocks=1 00:29:51.850 --rc geninfo_unexecuted_blocks=1 00:29:51.850 00:29:51.850 ' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:51.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.850 --rc genhtml_branch_coverage=1 00:29:51.850 --rc genhtml_function_coverage=1 00:29:51.850 --rc genhtml_legend=1 00:29:51.850 --rc geninfo_all_blocks=1 00:29:51.850 --rc geninfo_unexecuted_blocks=1 00:29:51.850 00:29:51.850 ' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.850 ************************************ 00:29:51.850 START TEST nvmf_multicontroller 00:29:51.850 ************************************ 00:29:51.850 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:52.109 * Looking for test storage... 00:29:52.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.109 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.109 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.109 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.109 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.110 --rc genhtml_branch_coverage=1 00:29:52.110 --rc genhtml_function_coverage=1 00:29:52.110 --rc genhtml_legend=1 00:29:52.110 --rc geninfo_all_blocks=1 00:29:52.110 --rc geninfo_unexecuted_blocks=1 00:29:52.110 00:29:52.110 ' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.110 --rc genhtml_branch_coverage=1 00:29:52.110 --rc genhtml_function_coverage=1 00:29:52.110 --rc genhtml_legend=1 00:29:52.110 --rc geninfo_all_blocks=1 00:29:52.110 --rc geninfo_unexecuted_blocks=1 00:29:52.110 00:29:52.110 ' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.110 --rc genhtml_branch_coverage=1 00:29:52.110 --rc genhtml_function_coverage=1 00:29:52.110 --rc genhtml_legend=1 00:29:52.110 --rc geninfo_all_blocks=1 00:29:52.110 --rc geninfo_unexecuted_blocks=1 00:29:52.110 00:29:52.110 ' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.110 --rc genhtml_branch_coverage=1 00:29:52.110 --rc genhtml_function_coverage=1 00:29:52.110 --rc genhtml_legend=1 00:29:52.110 --rc geninfo_all_blocks=1 00:29:52.110 --rc geninfo_unexecuted_blocks=1 00:29:52.110 00:29:52.110 ' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:52.110 19:59:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:52.110 19:59:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:52.110 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.111 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.111 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.111 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:52.111 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:52.111 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.111 19:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.010 
19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:54.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:54.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.010 19:59:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.010 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:54.011 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:54.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
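The nvmf_tcp_init step traced next splits the two E810 ports found above into a target/initiator pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), with an iptables ACCEPT rule for TCP port 4420 and a ping in each direction to confirm reachability. Condensed from the trace that follows (only the addresses and interface names that appear in this log are used), the setup is approximately:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target ns

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator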
00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.011 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.269 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.269 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.269 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.269 19:59:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:29:54.269 00:29:54.269 --- 10.0.0.2 ping statistics --- 00:29:54.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.269 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:54.269 00:29:54.269 --- 10.0.0.1 ping statistics --- 00:29:54.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.269 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3082633 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3082633 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3082633 ']' 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.269 19:59:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:54.528 [2024-10-13 19:59:44.144588] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:29:54.528 [2024-10-13 19:59:44.144759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.528 [2024-10-13 19:59:44.281765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:54.786 [2024-10-13 19:59:44.406283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.786 [2024-10-13 19:59:44.406354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.786 [2024-10-13 19:59:44.406380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.786 [2024-10-13 19:59:44.406416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.786 [2024-10-13 19:59:44.406446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.786 [2024-10-13 19:59:44.409195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.786 [2024-10-13 19:59:44.409288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.786 [2024-10-13 19:59:44.409293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.352 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:55.352 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:55.352 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:55.352 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:55.352 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.611 [2024-10-13 19:59:45.193133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.611 Malloc0 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.611 [2024-10-13 19:59:45.313977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.611 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.612 [2024-10-13 19:59:45.321804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.612 Malloc1 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.612 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3082902 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3082902 /var/tmp/bdevperf.sock 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3082902 ']' 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:55.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
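By this point the multicontroller test has built its whole fixture: the in-namespace nvmf_tgt exposes two subsystems (cnode1 and cnode2), each backed by a 64 MiB / 512 B malloc bdev and listening on 10.0.0.2 ports 4420 and 4421, and a bdevperf instance has been launched in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock with a 128-deep, 4 KiB write workload. In the log these calls go through the rpc_cmd wrapper; expressed directly against SPDK's scripts/rpc.py (the rpc.py form is an assumption about the wrapper, the arguments are copied from the trace), the target-side configuration is roughly:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421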
00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.870 19:59:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:56.806 NVMe0n1 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.806 1 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:56.806 request: 00:29:56.806 { 00:29:56.806 "name": "NVMe0", 00:29:56.806 "trtype": "tcp", 00:29:56.806 "traddr": "10.0.0.2", 00:29:56.806 "adrfam": "ipv4", 00:29:56.806 "trsvcid": "4420", 00:29:56.806 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:56.806 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:56.806 "hostaddr": "10.0.0.1", 00:29:56.806 "prchk_reftag": false, 00:29:56.806 "prchk_guard": false, 00:29:56.806 "hdgst": false, 00:29:56.806 "ddgst": false, 00:29:56.806 "allow_unrecognized_csi": false, 00:29:56.806 "method": "bdev_nvme_attach_controller", 00:29:56.806 "req_id": 1 00:29:56.806 } 00:29:56.806 Got JSON-RPC error response 00:29:56.806 response: 00:29:56.806 { 00:29:56.806 "code": -114, 00:29:56.806 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:56.806 } 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:56.806 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.064 request: 00:29:57.064 { 00:29:57.064 "name": "NVMe0", 00:29:57.064 "trtype": "tcp", 00:29:57.064 "traddr": "10.0.0.2", 00:29:57.064 "adrfam": "ipv4", 00:29:57.064 "trsvcid": "4420", 00:29:57.064 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:57.064 "hostaddr": "10.0.0.1", 00:29:57.064 "prchk_reftag": false, 00:29:57.064 "prchk_guard": false, 00:29:57.064 "hdgst": false, 00:29:57.064 "ddgst": false, 00:29:57.064 "allow_unrecognized_csi": false, 00:29:57.064 "method": "bdev_nvme_attach_controller", 00:29:57.064 "req_id": 1 00:29:57.064 } 00:29:57.064 Got JSON-RPC error response 00:29:57.064 response: 00:29:57.064 { 00:29:57.064 "code": -114, 00:29:57.064 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:57.064 } 00:29:57.064 19:59:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:57.064 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.065 request: 00:29:57.065 { 00:29:57.065 "name": "NVMe0", 00:29:57.065 "trtype": "tcp", 00:29:57.065 "traddr": "10.0.0.2", 00:29:57.065 "adrfam": "ipv4", 00:29:57.065 "trsvcid": "4420", 00:29:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:57.065 "hostaddr": "10.0.0.1", 00:29:57.065 "prchk_reftag": false, 00:29:57.065 "prchk_guard": false, 00:29:57.065 "hdgst": false, 00:29:57.065 "ddgst": false, 00:29:57.065 "multipath": "disable", 00:29:57.065 "allow_unrecognized_csi": false, 00:29:57.065 "method": "bdev_nvme_attach_controller", 00:29:57.065 "req_id": 1 00:29:57.065 } 00:29:57.065 Got JSON-RPC error response 00:29:57.065 response: 00:29:57.065 { 00:29:57.065 "code": -114, 00:29:57.065 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:57.065 } 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:57.065 19:59:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.065 request: 00:29:57.065 { 00:29:57.065 "name": "NVMe0", 00:29:57.065 "trtype": "tcp", 00:29:57.065 "traddr": "10.0.0.2", 00:29:57.065 "adrfam": "ipv4", 00:29:57.065 "trsvcid": "4420", 00:29:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:57.065 "hostaddr": "10.0.0.1", 00:29:57.065 "prchk_reftag": false, 00:29:57.065 "prchk_guard": false, 00:29:57.065 "hdgst": false, 00:29:57.065 "ddgst": false, 00:29:57.065 "multipath": "failover", 00:29:57.065 "allow_unrecognized_csi": false, 00:29:57.065 "method": "bdev_nvme_attach_controller", 00:29:57.065 "req_id": 1 00:29:57.065 } 00:29:57.065 Got JSON-RPC error response 00:29:57.065 response: 00:29:57.065 { 00:29:57.065 "code": -114, 00:29:57.065 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:57.065 } 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.065 NVMe0n1 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.065 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:57.065 19:59:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:58.437 { 00:29:58.437 "results": [ 00:29:58.437 { 00:29:58.437 "job": "NVMe0n1", 00:29:58.437 "core_mask": "0x1", 00:29:58.437 "workload": "write", 00:29:58.437 "status": "finished", 00:29:58.437 "queue_depth": 128, 00:29:58.437 "io_size": 4096, 00:29:58.437 "runtime": 1.008332, 00:29:58.437 "iops": 13492.579824898941, 00:29:58.437 "mibps": 52.70538994101149, 00:29:58.437 "io_failed": 0, 00:29:58.437 "io_timeout": 0, 00:29:58.437 "avg_latency_us": 9456.57308255407, 00:29:58.437 "min_latency_us": 2852.0296296296297, 00:29:58.437 "max_latency_us": 17961.71851851852 00:29:58.437 } 00:29:58.437 ], 00:29:58.437 "core_count": 1 00:29:58.437 } 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3082902 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 3082902 ']' 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3082902 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082902 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082902' 00:29:58.437 killing process with pid 3082902 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3082902 00:29:58.437 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3082902 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:59.371 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:59.371 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:59.371 [2024-10-13 19:59:45.513946] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:29:59.371 [2024-10-13 19:59:45.514101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082902 ] 00:29:59.371 [2024-10-13 19:59:45.647581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.371 [2024-10-13 19:59:45.773943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.371 [2024-10-13 19:59:46.846165] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 17db444d-0bad-4f4e-be92-c9d3a2b4c68a already exists 00:29:59.371 [2024-10-13 19:59:46.846219] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:17db444d-0bad-4f4e-be92-c9d3a2b4c68a alias for bdev NVMe1n1 00:29:59.371 [2024-10-13 19:59:46.846249] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:59.371 Running I/O for 1 seconds... 00:29:59.371 13413.00 IOPS, 52.39 MiB/s 00:29:59.371 Latency(us) 00:29:59.371 [2024-10-13T17:59:49.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.371 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:59.371 NVMe0n1 : 1.01 13492.58 52.71 0.00 0.00 9456.57 2852.03 17961.72 00:29:59.371 [2024-10-13T17:59:49.186Z] =================================================================================================================== 00:29:59.371 [2024-10-13T17:59:49.186Z] Total : 13492.58 52.71 0.00 0.00 9456.57 2852.03 17961.72 00:29:59.371 Received shutdown signal, test time was about 1.000000 seconds 00:29:59.371 00:29:59.372 Latency(us) 00:29:59.372 [2024-10-13T17:59:49.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.372 [2024-10-13T17:59:49.187Z] =================================================================================================================== 00:29:59.372 [2024-10-13T17:59:49.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.372 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.372 rmmod nvme_tcp 00:29:59.372 rmmod nvme_fabrics 00:29:59.372 rmmod nvme_keyring 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:59.372 
19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3082633 ']' 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 3082633 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3082633 ']' 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3082633 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:59.372 19:59:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082633 00:29:59.372 19:59:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:59.372 19:59:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:59.372 19:59:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082633' 00:29:59.372 killing process with pid 3082633 00:29:59.372 19:59:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3082633 00:29:59.372 19:59:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3082633 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.751 19:59:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.656 00:30:02.656 real 0m10.764s 00:30:02.656 user 0m21.835s 00:30:02.656 sys 0m2.677s 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.656 ************************************ 00:30:02.656 END TEST nvmf_multicontroller 00:30:02.656 ************************************ 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:02.656 19:59:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.915 ************************************ 00:30:02.915 START TEST nvmf_aer 00:30:02.915 ************************************ 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:02.915 * Looking for test storage... 00:30:02.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:02.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.915 --rc genhtml_branch_coverage=1 00:30:02.915 --rc genhtml_function_coverage=1 00:30:02.915 --rc genhtml_legend=1 00:30:02.915 --rc geninfo_all_blocks=1 00:30:02.915 --rc geninfo_unexecuted_blocks=1 00:30:02.915 00:30:02.915 ' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:02.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.915 --rc genhtml_branch_coverage=1 00:30:02.915 --rc genhtml_function_coverage=1 00:30:02.915 --rc genhtml_legend=1 00:30:02.915 --rc geninfo_all_blocks=1 00:30:02.915 --rc geninfo_unexecuted_blocks=1 00:30:02.915 00:30:02.915 ' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:02.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.915 --rc genhtml_branch_coverage=1 00:30:02.915 --rc genhtml_function_coverage=1 00:30:02.915 --rc genhtml_legend=1 00:30:02.915 --rc geninfo_all_blocks=1 00:30:02.915 --rc geninfo_unexecuted_blocks=1 00:30:02.915 00:30:02.915 ' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:02.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.915 --rc genhtml_branch_coverage=1 00:30:02.915 --rc genhtml_function_coverage=1 00:30:02.915 --rc genhtml_legend=1 00:30:02.915 --rc geninfo_all_blocks=1 00:30:02.915 --rc geninfo_unexecuted_blocks=1 00:30:02.915 00:30:02.915 ' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.915 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:02.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.916 19:59:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:04.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:04.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:04.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:04.817 19:59:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:04.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.817 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.076 
19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:30:05.076 00:30:05.076 --- 10.0.0.2 ping statistics --- 00:30:05.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.076 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:30:05.076 00:30:05.076 --- 10.0.0.1 ping statistics --- 00:30:05.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.076 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3085383 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3085383 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3085383 ']' 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.076 19:59:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:05.076 [2024-10-13 19:59:54.815005] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:30:05.076 [2024-10-13 19:59:54.815149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.335 [2024-10-13 19:59:54.953181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.335 [2024-10-13 19:59:55.098009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.335 [2024-10-13 19:59:55.098102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.335 [2024-10-13 19:59:55.098129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.335 [2024-10-13 19:59:55.098153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.335 [2024-10-13 19:59:55.098173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.335 [2024-10-13 19:59:55.101061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.335 [2024-10-13 19:59:55.101132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.335 [2024-10-13 19:59:55.101237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.335 [2024-10-13 19:59:55.101242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 [2024-10-13 19:59:55.891201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 Malloc0 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.268 19:59:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 [2024-10-13 19:59:56.008311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.268 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.268 [ 00:30:06.268 { 00:30:06.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:06.268 "subtype": "Discovery", 00:30:06.268 "listen_addresses": [], 00:30:06.268 "allow_any_host": true, 00:30:06.268 "hosts": [] 00:30:06.268 }, 00:30:06.268 { 00:30:06.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.268 "subtype": "NVMe", 00:30:06.268 "listen_addresses": [ 00:30:06.268 { 00:30:06.268 "trtype": "TCP", 00:30:06.268 "adrfam": "IPv4", 00:30:06.268 "traddr": "10.0.0.2", 00:30:06.268 "trsvcid": "4420" 00:30:06.268 } 00:30:06.268 ], 00:30:06.268 "allow_any_host": true, 00:30:06.268 "hosts": [], 00:30:06.268 "serial_number": "SPDK00000000000001", 00:30:06.269 "model_number": "SPDK bdev Controller", 00:30:06.269 "max_namespaces": 2, 00:30:06.269 "min_cntlid": 1, 00:30:06.269 "max_cntlid": 65519, 00:30:06.269 "namespaces": [ 00:30:06.269 { 00:30:06.269 "nsid": 1, 00:30:06.269 "bdev_name": "Malloc0", 00:30:06.269 "name": "Malloc0", 00:30:06.269 "nguid": "E0CE7CED3775483F914A8C8850E154F2", 00:30:06.269 "uuid": "e0ce7ced-3775-483f-914a-8c8850e154f2" 00:30:06.269 } 00:30:06.269 ] 00:30:06.269 } 00:30:06.269 ] 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3085542 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:06.269 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:06.526 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:06.527 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:30:06.527 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:30:06.527 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.785 Malloc1 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.785 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:07.043 [ 00:30:07.043 { 00:30:07.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:07.043 "subtype": "Discovery", 00:30:07.043 "listen_addresses": [], 00:30:07.043 "allow_any_host": true, 00:30:07.043 "hosts": [] 00:30:07.043 }, 00:30:07.043 { 00:30:07.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.043 "subtype": "NVMe", 00:30:07.043 "listen_addresses": [ 00:30:07.043 { 00:30:07.043 "trtype": "TCP", 00:30:07.043 "adrfam": "IPv4", 00:30:07.043 "traddr": "10.0.0.2", 00:30:07.043 "trsvcid": "4420" 00:30:07.043 } 00:30:07.043 ], 00:30:07.043 "allow_any_host": true, 00:30:07.043 "hosts": [], 00:30:07.043 "serial_number": "SPDK00000000000001", 00:30:07.043 "model_number": "SPDK bdev Controller", 00:30:07.043 "max_namespaces": 2, 00:30:07.043 "min_cntlid": 1, 00:30:07.043 "max_cntlid": 65519, 00:30:07.044 "namespaces": [ 00:30:07.044 { 00:30:07.044 "nsid": 1, 00:30:07.044 "bdev_name": "Malloc0", 00:30:07.044 "name": "Malloc0", 00:30:07.044 "nguid": "E0CE7CED3775483F914A8C8850E154F2", 00:30:07.044 "uuid": "e0ce7ced-3775-483f-914a-8c8850e154f2" 00:30:07.044 }, 00:30:07.044 { 00:30:07.044 "nsid": 2, 00:30:07.044 "bdev_name": "Malloc1", 00:30:07.044 "name": "Malloc1", 00:30:07.044 "nguid": "5157692E448C454D9342BE69FD9942B1", 00:30:07.044 "uuid": "5157692e-448c-454d-9342-be69fd9942b1" 00:30:07.044 } 00:30:07.044 ] 00:30:07.044 } 00:30:07.044 ] 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3085542 00:30:07.044 Asynchronous Event Request test 00:30:07.044 Attaching to 10.0.0.2 00:30:07.044 Attached to 10.0.0.2 00:30:07.044 Registering asynchronous event callbacks... 00:30:07.044 Starting namespace attribute notice tests for all controllers... 00:30:07.044 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:07.044 aer_cb - Changed Namespace 00:30:07.044 Cleaning up... 
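The namespace-change notice logged just above ("aer_cb - Changed Namespace") is driven entirely by RPCs visible in this trace. A minimal sketch of the same sequence, assuming an already-running nvmf_tgt and the stock ./scripts/rpc.py client from the SPDK repo root (addresses, serial and bdev names are simply the values this run used, not requirements):

    # assumes: nvmf_tgt is running and ./scripts/rpc.py is the stock SPDK RPC client
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # with test/nvme/aer/aer connected and waiting, hot-add a second namespace;
    # the target raises a Namespace Attribute Changed AER, which the trace shows
    # as "aer_cb - Changed Namespace"
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The waitforfile loop traced before the hot-add (i incremented every 0.1 s, giving up at 200 tries) follows this general polling pattern; a reconstruction from the trace, not the literal autotest_common.sh body:

    # wait up to ~20 s for the touch file the aer binary creates once its
    # callbacks are registered
    waitforfile() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]    # non-zero exit if the file never appeared
    }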
00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.044 19:59:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.302 rmmod nvme_tcp 00:30:07.302 rmmod nvme_fabrics 00:30:07.302 rmmod nvme_keyring 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3085383 ']' 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3085383 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3085383 ']' 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3085383 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085383 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085383' 00:30:07.302 killing process with pid 3085383 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # 
kill 3085383 00:30:07.302 19:59:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3085383 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.692 19:59:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.593 00:30:10.593 real 0m7.753s 00:30:10.593 user 0m12.249s 00:30:10.593 sys 0m2.206s 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.593 ************************************ 00:30:10.593 END TEST nvmf_aer 00:30:10.593 ************************************ 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.593 ************************************ 00:30:10.593 START TEST nvmf_async_init 00:30:10.593 ************************************ 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:10.593 * Looking for test storage... 
00:30:10.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.593 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:10.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.593 --rc genhtml_branch_coverage=1 00:30:10.593 --rc genhtml_function_coverage=1 00:30:10.593 --rc genhtml_legend=1 00:30:10.593 --rc geninfo_all_blocks=1 00:30:10.593 --rc geninfo_unexecuted_blocks=1 00:30:10.593 00:30:10.593 ' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.852 --rc genhtml_branch_coverage=1 00:30:10.852 --rc genhtml_function_coverage=1 00:30:10.852 --rc genhtml_legend=1 00:30:10.852 --rc geninfo_all_blocks=1 00:30:10.852 --rc geninfo_unexecuted_blocks=1 00:30:10.852 00:30:10.852 ' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.852 --rc genhtml_branch_coverage=1 00:30:10.852 --rc genhtml_function_coverage=1 00:30:10.852 --rc genhtml_legend=1 00:30:10.852 --rc geninfo_all_blocks=1 00:30:10.852 --rc geninfo_unexecuted_blocks=1 00:30:10.852 00:30:10.852 ' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.852 --rc genhtml_branch_coverage=1 00:30:10.852 --rc genhtml_function_coverage=1 00:30:10.852 --rc genhtml_legend=1 00:30:10.852 --rc geninfo_all_blocks=1 00:30:10.852 --rc geninfo_unexecuted_blocks=1 00:30:10.852 00:30:10.852 ' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.852 20:00:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:10.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:10.852 20:00:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0cde939d05184cd4aa7cf7e405112fa8 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.852 20:00:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:12.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:12.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.753 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:12.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:12.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.754 20:00:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.754 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:30:13.012 00:30:13.012 --- 10.0.0.2 ping statistics --- 00:30:13.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.012 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:30:13.012 00:30:13.012 --- 10.0.0.1 ping statistics --- 00:30:13.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.012 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3087853 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3087853 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3087853 ']' 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:13.012 20:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.012 [2024-10-13 20:00:02.699891] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
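The two ping checks above complete the network fixture for this test: one port of the E810 pair (cvl_0_0, 10.0.0.2) is moved into a private namespace where nvmf_tgt runs, while the host keeps cvl_0_1 (10.0.0.1) as the initiator side. A sketch of that wiring, using only commands that appear in the trace (the interface names are specific to this rig):

    # namespace wiring done by nvmftestinit on this machine
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host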
00:30:13.012 [2024-10-13 20:00:02.700050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.270 [2024-10-13 20:00:02.838099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.270 [2024-10-13 20:00:02.956453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.270 [2024-10-13 20:00:02.956540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.270 [2024-10-13 20:00:02.956563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.270 [2024-10-13 20:00:02.956584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.270 [2024-10-13 20:00:02.956600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.270 [2024-10-13 20:00:02.958044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 [2024-10-13 20:00:03.738791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 null0 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0cde939d05184cd4aa7cf7e405112fa8 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 [2024-10-13 20:00:03.779221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.205 nvme0n1 00:30:14.205 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.205 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:14.205 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.205 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.464 [ 00:30:14.464 { 00:30:14.464 "name": "nvme0n1", 00:30:14.464 "aliases": [ 00:30:14.464 "0cde939d-0518-4cd4-aa7c-f7e405112fa8" 00:30:14.464 ], 00:30:14.464 "product_name": "NVMe disk", 00:30:14.464 "block_size": 512, 00:30:14.464 "num_blocks": 2097152, 00:30:14.464 "uuid": "0cde939d-0518-4cd4-aa7c-f7e405112fa8", 00:30:14.464 "numa_id": 0, 00:30:14.464 "assigned_rate_limits": { 00:30:14.464 "rw_ios_per_sec": 0, 00:30:14.464 "rw_mbytes_per_sec": 0, 00:30:14.464 "r_mbytes_per_sec": 0, 00:30:14.464 "w_mbytes_per_sec": 0 00:30:14.464 }, 00:30:14.464 "claimed": false, 00:30:14.464 "zoned": false, 00:30:14.464 "supported_io_types": { 00:30:14.464 "read": true, 00:30:14.464 "write": true, 00:30:14.464 "unmap": false, 00:30:14.464 "flush": true, 00:30:14.464 "reset": true, 00:30:14.464 "nvme_admin": true, 00:30:14.464 "nvme_io": true, 00:30:14.464 "nvme_io_md": false, 00:30:14.464 "write_zeroes": true, 00:30:14.464 "zcopy": false, 00:30:14.464 "get_zone_info": false, 00:30:14.464 "zone_management": false, 00:30:14.464 "zone_append": false, 00:30:14.464 "compare": true, 00:30:14.464 "compare_and_write": true, 00:30:14.464 "abort": true, 00:30:14.464 "seek_hole": false, 00:30:14.464 "seek_data": false, 00:30:14.464 "copy": true, 00:30:14.464 "nvme_iov_md": false 00:30:14.464 }, 00:30:14.464 
"memory_domains": [ 00:30:14.464 { 00:30:14.464 "dma_device_id": "system", 00:30:14.464 "dma_device_type": 1 00:30:14.464 } 00:30:14.464 ], 00:30:14.464 "driver_specific": { 00:30:14.464 "nvme": [ 00:30:14.464 { 00:30:14.464 "trid": { 00:30:14.464 "trtype": "TCP", 00:30:14.464 "adrfam": "IPv4", 00:30:14.464 "traddr": "10.0.0.2", 00:30:14.464 "trsvcid": "4420", 00:30:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:14.464 }, 00:30:14.464 "ctrlr_data": { 00:30:14.464 "cntlid": 1, 00:30:14.464 "vendor_id": "0x8086", 00:30:14.464 "model_number": "SPDK bdev Controller", 00:30:14.464 "serial_number": "00000000000000000000", 00:30:14.464 "firmware_revision": "25.01", 00:30:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.464 "oacs": { 00:30:14.464 "security": 0, 00:30:14.464 "format": 0, 00:30:14.464 "firmware": 0, 00:30:14.464 "ns_manage": 0 00:30:14.464 }, 00:30:14.464 "multi_ctrlr": true, 00:30:14.464 "ana_reporting": false 00:30:14.464 }, 00:30:14.464 "vs": { 00:30:14.464 "nvme_version": "1.3" 00:30:14.464 }, 00:30:14.464 "ns_data": { 00:30:14.464 "id": 1, 00:30:14.464 "can_share": true 00:30:14.464 } 00:30:14.464 } 00:30:14.464 ], 00:30:14.464 "mp_policy": "active_passive" 00:30:14.464 } 00:30:14.464 } 00:30:14.464 ] 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.464 [2024-10-13 20:00:04.035949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.464 [2024-10-13 20:00:04.036087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:14.464 [2024-10-13 20:00:04.168783] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.464 [ 00:30:14.464 { 00:30:14.464 "name": "nvme0n1", 00:30:14.464 "aliases": [ 00:30:14.464 "0cde939d-0518-4cd4-aa7c-f7e405112fa8" 00:30:14.464 ], 00:30:14.464 "product_name": "NVMe disk", 00:30:14.464 "block_size": 512, 00:30:14.464 "num_blocks": 2097152, 00:30:14.464 "uuid": "0cde939d-0518-4cd4-aa7c-f7e405112fa8", 00:30:14.464 "numa_id": 0, 00:30:14.464 "assigned_rate_limits": { 00:30:14.464 "rw_ios_per_sec": 0, 00:30:14.464 "rw_mbytes_per_sec": 0, 00:30:14.464 "r_mbytes_per_sec": 0, 00:30:14.464 "w_mbytes_per_sec": 0 00:30:14.464 }, 00:30:14.464 "claimed": false, 00:30:14.464 "zoned": false, 00:30:14.464 "supported_io_types": { 00:30:14.464 "read": true, 00:30:14.464 "write": true, 00:30:14.464 "unmap": false, 00:30:14.464 "flush": true, 00:30:14.464 "reset": true, 00:30:14.464 "nvme_admin": true, 00:30:14.464 "nvme_io": true, 00:30:14.464 "nvme_io_md": false, 00:30:14.464 "write_zeroes": true, 00:30:14.464 "zcopy": false, 00:30:14.464 "get_zone_info": false, 00:30:14.464 "zone_management": false, 00:30:14.464 "zone_append": false, 00:30:14.464 "compare": true, 00:30:14.464 "compare_and_write": true, 00:30:14.464 "abort": true, 00:30:14.464 "seek_hole": false, 00:30:14.464 "seek_data": false, 00:30:14.464 "copy": true, 00:30:14.464 "nvme_iov_md": false 00:30:14.464 }, 00:30:14.464 "memory_domains": [ 00:30:14.464 { 00:30:14.464 "dma_device_id": "system", 00:30:14.464 "dma_device_type": 1 00:30:14.464 } 00:30:14.464 ], 00:30:14.464 "driver_specific": { 00:30:14.464 "nvme": [ 00:30:14.464 { 00:30:14.464 "trid": { 00:30:14.464 "trtype": "TCP", 00:30:14.464 "adrfam": "IPv4", 00:30:14.464 "traddr": "10.0.0.2", 00:30:14.464 "trsvcid": "4420", 00:30:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:14.464 }, 00:30:14.464 "ctrlr_data": { 00:30:14.464 "cntlid": 2, 00:30:14.464 "vendor_id": "0x8086", 00:30:14.464 "model_number": "SPDK bdev Controller", 00:30:14.464 "serial_number": "00000000000000000000", 00:30:14.464 "firmware_revision": "25.01", 00:30:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.464 "oacs": { 00:30:14.464 "security": 0, 00:30:14.464 "format": 0, 00:30:14.464 "firmware": 0, 00:30:14.464 "ns_manage": 0 00:30:14.464 }, 00:30:14.464 "multi_ctrlr": true, 00:30:14.464 "ana_reporting": false 00:30:14.464 }, 00:30:14.464 "vs": { 00:30:14.464 "nvme_version": "1.3" 00:30:14.464 }, 00:30:14.464 "ns_data": { 00:30:14.464 "id": 1, 00:30:14.464 "can_share": true 00:30:14.464 } 00:30:14.464 } 00:30:14.464 ], 00:30:14.464 "mp_policy": "active_passive" 00:30:14.464 } 00:30:14.464 } 00:30:14.464 ] 00:30:14.464 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UilIucmEIy 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UilIucmEIy 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UilIucmEIy 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.465 [2024-10-13 20:00:04.228696] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:14.465 [2024-10-13 20:00:04.229013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.465 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.465 [2024-10-13 20:00:04.244716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:14.723 nvme0n1 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.723 [ 00:30:14.723 { 00:30:14.723 "name": "nvme0n1", 00:30:14.723 "aliases": [ 00:30:14.723 "0cde939d-0518-4cd4-aa7c-f7e405112fa8" 00:30:14.723 ], 00:30:14.723 "product_name": "NVMe disk", 00:30:14.723 "block_size": 512, 00:30:14.723 "num_blocks": 2097152, 00:30:14.723 "uuid": "0cde939d-0518-4cd4-aa7c-f7e405112fa8", 00:30:14.723 "numa_id": 0, 00:30:14.723 "assigned_rate_limits": { 00:30:14.723 "rw_ios_per_sec": 0, 00:30:14.723 "rw_mbytes_per_sec": 0, 00:30:14.723 "r_mbytes_per_sec": 0, 00:30:14.723 "w_mbytes_per_sec": 0 00:30:14.723 }, 00:30:14.723 "claimed": false, 00:30:14.723 "zoned": false, 00:30:14.723 "supported_io_types": { 00:30:14.723 "read": true, 00:30:14.723 "write": true, 00:30:14.723 "unmap": false, 00:30:14.723 "flush": true, 00:30:14.723 "reset": true, 00:30:14.723 "nvme_admin": true, 00:30:14.723 "nvme_io": true, 00:30:14.723 "nvme_io_md": false, 00:30:14.723 "write_zeroes": true, 00:30:14.723 "zcopy": false, 00:30:14.723 "get_zone_info": false, 00:30:14.723 "zone_management": false, 00:30:14.723 "zone_append": false, 00:30:14.723 "compare": true, 00:30:14.723 "compare_and_write": true, 00:30:14.723 "abort": true, 00:30:14.723 "seek_hole": false, 00:30:14.723 "seek_data": false, 00:30:14.723 "copy": true, 00:30:14.723 "nvme_iov_md": false 00:30:14.723 }, 00:30:14.723 "memory_domains": [ 00:30:14.723 { 00:30:14.723 "dma_device_id": "system", 00:30:14.723 "dma_device_type": 1 00:30:14.723 } 00:30:14.723 ], 00:30:14.723 "driver_specific": { 00:30:14.723 "nvme": [ 00:30:14.723 { 00:30:14.723 "trid": { 00:30:14.723 "trtype": "TCP", 00:30:14.723 "adrfam": "IPv4", 00:30:14.723 "traddr": "10.0.0.2", 00:30:14.723 "trsvcid": "4421", 00:30:14.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:14.723 }, 00:30:14.723 "ctrlr_data": { 00:30:14.723 "cntlid": 3, 00:30:14.723 "vendor_id": "0x8086", 00:30:14.723 "model_number": "SPDK bdev Controller", 00:30:14.723 "serial_number": "00000000000000000000", 00:30:14.723 "firmware_revision": "25.01", 00:30:14.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.723 "oacs": { 00:30:14.723 "security": 0, 00:30:14.723 "format": 0, 00:30:14.723 "firmware": 0, 00:30:14.723 "ns_manage": 0 00:30:14.723 }, 00:30:14.723 "multi_ctrlr": true, 00:30:14.723 "ana_reporting": false 00:30:14.723 }, 00:30:14.723 "vs": { 00:30:14.723 "nvme_version": "1.3" 00:30:14.723 }, 00:30:14.723 "ns_data": { 00:30:14.723 "id": 1, 00:30:14.723 "can_share": true 00:30:14.723 } 00:30:14.723 } 00:30:14.723 ], 00:30:14.723 "mp_policy": "active_passive" 00:30:14.723 } 00:30:14.723 } 00:30:14.723 ] 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UilIucmEIy 00:30:14.723 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
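The 4421 listener above exercises the experimental TLS path: a PSK in the NVMe TLS interchange format is registered through the file keyring, any-host access is disabled, and both the listener and the host entry are created with --secure-channel/--psk. A minimal sketch using only RPCs that appear in this trace (the key string is the sample PSK from the trace, and /tmp/psk.key stands in for the mktemp path the test used):

    # TLS-secured listener setup as traced above
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
    chmod 0600 /tmp/psk.key
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0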
00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.724 rmmod nvme_tcp 00:30:14.724 rmmod nvme_fabrics 00:30:14.724 rmmod nvme_keyring 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3087853 ']' 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3087853 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3087853 ']' 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3087853 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3087853 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3087853' 00:30:14.724 killing process with pid 3087853 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3087853 00:30:14.724 20:00:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3087853 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.098 20:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.998 00:30:17.998 real 0m7.388s 00:30:17.998 user 0m3.973s 00:30:17.998 sys 0m2.133s 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:17.998 ************************************ 00:30:17.998 END TEST nvmf_async_init 00:30:17.998 ************************************ 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.998 ************************************ 00:30:17.998 START TEST dma 00:30:17.998 ************************************ 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:17.998 * Looking for test storage... 00:30:17.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:30:17.998 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:18.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.257 --rc genhtml_branch_coverage=1 00:30:18.257 --rc genhtml_function_coverage=1 00:30:18.257 --rc genhtml_legend=1 00:30:18.257 --rc geninfo_all_blocks=1 00:30:18.257 --rc geninfo_unexecuted_blocks=1 00:30:18.257 00:30:18.257 ' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:18.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.257 --rc genhtml_branch_coverage=1 00:30:18.257 --rc genhtml_function_coverage=1 00:30:18.257 --rc genhtml_legend=1 00:30:18.257 --rc geninfo_all_blocks=1 00:30:18.257 --rc geninfo_unexecuted_blocks=1 00:30:18.257 00:30:18.257 ' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:18.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.257 --rc genhtml_branch_coverage=1 00:30:18.257 --rc genhtml_function_coverage=1 00:30:18.257 --rc genhtml_legend=1 00:30:18.257 --rc geninfo_all_blocks=1 00:30:18.257 --rc geninfo_unexecuted_blocks=1 00:30:18.257 00:30:18.257 ' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:18.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.257 --rc genhtml_branch_coverage=1 00:30:18.257 --rc genhtml_function_coverage=1 00:30:18.257 --rc genhtml_legend=1 00:30:18.257 --rc geninfo_all_blocks=1 00:30:18.257 --rc geninfo_unexecuted_blocks=1 00:30:18.257 00:30:18.257 ' 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.257 
20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.257 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:18.258 00:30:18.258 real 0m0.174s 00:30:18.258 user 0m0.112s 00:30:18.258 sys 0m0.070s 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:18.258 ************************************ 00:30:18.258 END TEST dma 00:30:18.258 ************************************ 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.258 ************************************ 00:30:18.258 START TEST nvmf_identify 00:30:18.258 
************************************ 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:18.258 * Looking for test storage... 00:30:18.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:30:18.258 20:00:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.258 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:18.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.516 --rc genhtml_branch_coverage=1 00:30:18.516 --rc genhtml_function_coverage=1 00:30:18.516 --rc genhtml_legend=1 00:30:18.516 --rc geninfo_all_blocks=1 00:30:18.516 --rc geninfo_unexecuted_blocks=1 00:30:18.516 00:30:18.516 ' 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:18.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.516 --rc genhtml_branch_coverage=1 00:30:18.516 --rc genhtml_function_coverage=1 00:30:18.516 --rc genhtml_legend=1 00:30:18.516 --rc geninfo_all_blocks=1 00:30:18.516 --rc geninfo_unexecuted_blocks=1 00:30:18.516 00:30:18.516 ' 00:30:18.516 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:18.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.516 --rc genhtml_branch_coverage=1 00:30:18.516 --rc genhtml_function_coverage=1 00:30:18.516 --rc genhtml_legend=1 00:30:18.516 --rc geninfo_all_blocks=1 00:30:18.516 --rc geninfo_unexecuted_blocks=1 00:30:18.516 00:30:18.517 ' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:18.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.517 --rc genhtml_branch_coverage=1 00:30:18.517 --rc genhtml_function_coverage=1 00:30:18.517 --rc genhtml_legend=1 00:30:18.517 --rc geninfo_all_blocks=1 00:30:18.517 --rc geninfo_unexecuted_blocks=1 00:30:18.517 00:30:18.517 ' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.517 20:00:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.417 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:20.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:20.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
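The entries above are nvmftestinit probing for usable NICs: it builds allow-lists of supported PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs), then matches the installed devices against them, which is how both E810 ports, 0000:0a:00.0 and 0000:0a:00.1 bound to the ice driver, are found; the entries that follow resolve each matched PCI function to its kernel net device through /sys/bus/pci/devices/<pci>/net (cvl_0_0 and cvl_0_1 here). A rough manual equivalent, using standard tools rather than the harness helpers:

  # list E810 ports by vendor:device ID, as matched in the log (8086:159b)
  lspci -Dnn -d 8086:159b
  # resolve a matched PCI function to its net device name
  ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0 in this run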
00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:20.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:20.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.418 20:00:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:30:20.418 00:30:20.418 --- 10.0.0.2 ping statistics --- 00:30:20.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.418 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:30:20.418 00:30:20.418 --- 10.0.0.1 ping statistics --- 00:30:20.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.418 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3090641 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3090641 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3090641 ']' 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.418 20:00:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.418 [2024-10-13 20:00:10.199100] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:30:20.418 [2024-10-13 20:00:10.199233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.677 [2024-10-13 20:00:10.340100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.677 [2024-10-13 20:00:10.486886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.677 [2024-10-13 20:00:10.486967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.677 [2024-10-13 20:00:10.486993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.677 [2024-10-13 20:00:10.487018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.677 [2024-10-13 20:00:10.487038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.677 [2024-10-13 20:00:10.490029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.677 [2024-10-13 20:00:10.490105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.677 [2024-10-13 20:00:10.490194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.677 [2024-10-13 20:00:10.490198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.615 [2024-10-13 20:00:11.219504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.615 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 Malloc0 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 [2024-10-13 20:00:11.372468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.616 [ 00:30:21.616 { 00:30:21.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:21.616 "subtype": "Discovery", 00:30:21.616 "listen_addresses": [ 00:30:21.616 { 00:30:21.616 "trtype": "TCP", 00:30:21.616 "adrfam": "IPv4", 00:30:21.616 "traddr": "10.0.0.2", 00:30:21.616 "trsvcid": "4420" 00:30:21.616 } 00:30:21.616 ], 00:30:21.616 "allow_any_host": true, 00:30:21.616 "hosts": [] 00:30:21.616 }, 00:30:21.616 { 00:30:21.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.616 "subtype": "NVMe", 00:30:21.616 "listen_addresses": [ 00:30:21.616 { 00:30:21.616 "trtype": "TCP", 00:30:21.616 "adrfam": "IPv4", 00:30:21.616 "traddr": "10.0.0.2", 00:30:21.616 "trsvcid": "4420" 00:30:21.616 } 00:30:21.616 ], 00:30:21.616 "allow_any_host": true, 00:30:21.616 "hosts": [], 00:30:21.616 "serial_number": "SPDK00000000000001", 00:30:21.616 "model_number": "SPDK bdev Controller", 00:30:21.616 "max_namespaces": 32, 00:30:21.616 "min_cntlid": 1, 00:30:21.616 "max_cntlid": 65519, 00:30:21.616 "namespaces": [ 00:30:21.616 { 00:30:21.616 "nsid": 1, 00:30:21.616 "bdev_name": "Malloc0", 00:30:21.616 "name": "Malloc0", 00:30:21.616 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:21.616 "eui64": "ABCDEF0123456789", 00:30:21.616 "uuid": "36772bc5-6686-449d-9bb3-e4a5842b9a87" 00:30:21.616 } 00:30:21.616 ] 00:30:21.616 } 00:30:21.616 ] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.616 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:21.878 [2024-10-13 20:00:11.445522] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:30:21.878 [2024-10-13 20:00:11.445631] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090798 ] 00:30:21.878 [2024-10-13 20:00:11.512347] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:21.878 [2024-10-13 20:00:11.516523] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:21.878 [2024-10-13 20:00:11.516549] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:21.878 [2024-10-13 20:00:11.516591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:21.878 [2024-10-13 20:00:11.516623] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:21.878 [2024-10-13 20:00:11.517486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:21.878 [2024-10-13 20:00:11.517575] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:21.878 [2024-10-13 20:00:11.531421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:21.878 [2024-10-13 20:00:11.531457] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:21.878 [2024-10-13 20:00:11.531474] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:21.878 [2024-10-13 20:00:11.531485] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:21.878 [2024-10-13 20:00:11.531568] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.531592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.531607] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.878 [2024-10-13 20:00:11.531651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:21.878 [2024-10-13 20:00:11.531704] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.878 [2024-10-13 20:00:11.538415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.878 [2024-10-13 20:00:11.538451] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.878 [2024-10-13 20:00:11.538466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.538488] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.878 [2024-10-13 20:00:11.538528] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:21.878 [2024-10-13 20:00:11.538556] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:21.878 [2024-10-13 20:00:11.538573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:21.878 [2024-10-13 20:00:11.538605] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.538621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.538639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.878 [2024-10-13 20:00:11.538662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.878 [2024-10-13 20:00:11.538699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.878 [2024-10-13 20:00:11.538858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.878 [2024-10-13 20:00:11.538882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.878 [2024-10-13 20:00:11.538896] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.538908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.878 [2024-10-13 20:00:11.538932] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:21.878 [2024-10-13 20:00:11.538956] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:21.878 [2024-10-13 20:00:11.538985] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.538999] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.539011] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.878 [2024-10-13 20:00:11.539044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.878 [2024-10-13 20:00:11.539084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.878 [2024-10-13 20:00:11.539197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.878 [2024-10-13 20:00:11.539220] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.878 [2024-10-13 20:00:11.539232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.878 [2024-10-13 20:00:11.539244] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.878 [2024-10-13 20:00:11.539268] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:21.879 [2024-10-13 20:00:11.539297] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:21.879 [2024-10-13 20:00:11.539318] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.539336] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.539347] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.539376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.879 [2024-10-13 20:00:11.539417] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.879 [2024-10-13 20:00:11.539530] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.879 [2024-10-13 20:00:11.539552] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.879 [2024-10-13 20:00:11.539564] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.539580] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.879 [2024-10-13 20:00:11.539598] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:21.879 [2024-10-13 20:00:11.539626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.539643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.539655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.539674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.879 [2024-10-13 20:00:11.539716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.879 [2024-10-13 20:00:11.539826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.879 [2024-10-13 20:00:11.539848] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.879 [2024-10-13 20:00:11.539860] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.539871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.879 [2024-10-13 20:00:11.539888] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:21.879 [2024-10-13 20:00:11.539903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:21.879 [2024-10-13 20:00:11.539936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:21.879 [2024-10-13 20:00:11.540055] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:21.879 [2024-10-13 20:00:11.540070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:21.879 [2024-10-13 20:00:11.540096] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540122] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.540142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.879 [2024-10-13 20:00:11.540176] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.879 [2024-10-13 20:00:11.540290] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.879 [2024-10-13 20:00:11.540312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.879 [2024-10-13 20:00:11.540325] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.879 [2024-10-13 20:00:11.540351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:21.879 [2024-10-13 20:00:11.540379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540410] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540423] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.540443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.879 [2024-10-13 20:00:11.540476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.879 [2024-10-13 20:00:11.540582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.879 [2024-10-13 20:00:11.540608] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.879 [2024-10-13 20:00:11.540621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540633] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.879 [2024-10-13 20:00:11.540647] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:21.879 [2024-10-13 20:00:11.540662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:21.879 [2024-10-13 20:00:11.540685] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:21.879 [2024-10-13 20:00:11.540709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:21.879 [2024-10-13 20:00:11.540744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.540761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.540788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.879 [2024-10-13 20:00:11.540821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.879 [2024-10-13 20:00:11.541000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.879 [2024-10-13 20:00:11.541028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.879 [2024-10-13 20:00:11.541041] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541054] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:21.879 [2024-10-13 20:00:11.541069] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:21.879 [2024-10-13 20:00:11.541084] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541104] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541119] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.879 [2024-10-13 20:00:11.541169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.879 [2024-10-13 20:00:11.541180] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.879 [2024-10-13 20:00:11.541217] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:21.879 [2024-10-13 20:00:11.541233] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:21.879 [2024-10-13 20:00:11.541247] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:21.879 [2024-10-13 20:00:11.541269] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:21.879 [2024-10-13 20:00:11.541283] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:21.879 [2024-10-13 20:00:11.541303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:21.879 [2024-10-13 20:00:11.541346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:21.879 [2024-10-13 20:00:11.541369] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541389] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.541454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:21.879 [2024-10-13 20:00:11.541489] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.879 [2024-10-13 20:00:11.541625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.879 [2024-10-13 20:00:11.541647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.879 [2024-10-13 20:00:11.541659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.879 [2024-10-13 20:00:11.541699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541721] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:30:21.879 [2024-10-13 20:00:11.541733] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.541757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.879 [2024-10-13 20:00:11.541776] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541788] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.541819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.879 [2024-10-13 20:00:11.541838] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541850] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541861] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.541878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.879 [2024-10-13 20:00:11.541910] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.541932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.541948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.879 [2024-10-13 20:00:11.541962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:21.879 [2024-10-13 20:00:11.542014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:21.879 [2024-10-13 20:00:11.542037] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.879 [2024-10-13 20:00:11.542055] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:21.879 [2024-10-13 20:00:11.542076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.880 [2024-10-13 20:00:11.542111] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:21.880 [2024-10-13 20:00:11.542138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:21.880 [2024-10-13 20:00:11.542154] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:21.880 [2024-10-13 20:00:11.542166] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.880 [2024-10-13 20:00:11.542183] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:21.880 [2024-10-13 20:00:11.542322] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.880 
[2024-10-13 20:00:11.542344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.880 [2024-10-13 20:00:11.542356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.542367] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:21.880 [2024-10-13 20:00:11.542384] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:21.880 [2024-10-13 20:00:11.546440] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:21.880 [2024-10-13 20:00:11.546478] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.546519] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:21.880 [2024-10-13 20:00:11.546557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.880 [2024-10-13 20:00:11.546594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:21.880 [2024-10-13 20:00:11.546749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.880 [2024-10-13 20:00:11.546784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.880 [2024-10-13 20:00:11.546803] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.546816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:21.880 [2024-10-13 20:00:11.546830] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:21.880 [2024-10-13 20:00:11.546851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.546881] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.546897] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.546916] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.880 [2024-10-13 20:00:11.546934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.880 [2024-10-13 20:00:11.546945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.546962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:21.880 [2024-10-13 20:00:11.547008] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:21.880 [2024-10-13 20:00:11.547078] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547096] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:21.880 [2024-10-13 20:00:11.547122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.880 [2024-10-13 20:00:11.547149] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547174] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:21.880 [2024-10-13 20:00:11.547203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.880 [2024-10-13 20:00:11.547239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:21.880 [2024-10-13 20:00:11.547271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:21.880 [2024-10-13 20:00:11.547513] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.880 [2024-10-13 20:00:11.547541] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.880 [2024-10-13 20:00:11.547555] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547567] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:21.880 [2024-10-13 20:00:11.547580] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:21.880 [2024-10-13 20:00:11.547593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547622] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547637] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547657] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.880 [2024-10-13 20:00:11.547674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.880 [2024-10-13 20:00:11.547686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.547698] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:21.880 [2024-10-13 20:00:11.593420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.880 [2024-10-13 20:00:11.593465] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.880 [2024-10-13 20:00:11.593479] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.593493] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:21.880 [2024-10-13 20:00:11.593553] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.593573] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:21.880 [2024-10-13 20:00:11.593600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.880 [2024-10-13 20:00:11.593649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:21.880 [2024-10-13 20:00:11.593849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.880 [2024-10-13 20:00:11.593878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.880 [2024-10-13 20:00:11.593892] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.593903] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:21.880 [2024-10-13 20:00:11.593916] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:21.880 [2024-10-13 20:00:11.593929] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.593949] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.593962] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.593982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.880 [2024-10-13 20:00:11.593999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.880 [2024-10-13 20:00:11.594024] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.594036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:21.880 [2024-10-13 20:00:11.594067] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.594084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:21.880 [2024-10-13 20:00:11.594105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.880 [2024-10-13 20:00:11.594155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:21.880 [2024-10-13 20:00:11.594311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.880 [2024-10-13 20:00:11.594340] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.880 [2024-10-13 20:00:11.594353] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.594364] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:21.880 [2024-10-13 20:00:11.594378] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:21.880 [2024-10-13 20:00:11.594390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.594423] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.594438] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.634520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.880 [2024-10-13 20:00:11.634567] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.880 [2024-10-13 20:00:11.634581] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.880 [2024-10-13 20:00:11.634595] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:21.880 ===================================================== 00:30:21.880 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:21.880 ===================================================== 00:30:21.880 Controller Capabilities/Features 00:30:21.880 ================================ 00:30:21.880 Vendor ID: 0000 00:30:21.880 Subsystem Vendor ID: 0000 00:30:21.880 Serial Number: 
.................... 00:30:21.880 Model Number: ........................................ 00:30:21.880 Firmware Version: 25.01 00:30:21.880 Recommended Arb Burst: 0 00:30:21.880 IEEE OUI Identifier: 00 00 00 00:30:21.880 Multi-path I/O 00:30:21.880 May have multiple subsystem ports: No 00:30:21.880 May have multiple controllers: No 00:30:21.880 Associated with SR-IOV VF: No 00:30:21.880 Max Data Transfer Size: 131072 00:30:21.880 Max Number of Namespaces: 0 00:30:21.880 Max Number of I/O Queues: 1024 00:30:21.880 NVMe Specification Version (VS): 1.3 00:30:21.880 NVMe Specification Version (Identify): 1.3 00:30:21.880 Maximum Queue Entries: 128 00:30:21.880 Contiguous Queues Required: Yes 00:30:21.880 Arbitration Mechanisms Supported 00:30:21.880 Weighted Round Robin: Not Supported 00:30:21.880 Vendor Specific: Not Supported 00:30:21.880 Reset Timeout: 15000 ms 00:30:21.880 Doorbell Stride: 4 bytes 00:30:21.880 NVM Subsystem Reset: Not Supported 00:30:21.880 Command Sets Supported 00:30:21.880 NVM Command Set: Supported 00:30:21.880 Boot Partition: Not Supported 00:30:21.880 Memory Page Size Minimum: 4096 bytes 00:30:21.880 Memory Page Size Maximum: 4096 bytes 00:30:21.880 Persistent Memory Region: Not Supported 00:30:21.880 Optional Asynchronous Events Supported 00:30:21.880 Namespace Attribute Notices: Not Supported 00:30:21.880 Firmware Activation Notices: Not Supported 00:30:21.880 ANA Change Notices: Not Supported 00:30:21.880 PLE Aggregate Log Change Notices: Not Supported 00:30:21.880 LBA Status Info Alert Notices: Not Supported 00:30:21.880 EGE Aggregate Log Change Notices: Not Supported 00:30:21.880 Normal NVM Subsystem Shutdown event: Not Supported 00:30:21.881 Zone Descriptor Change Notices: Not Supported 00:30:21.881 Discovery Log Change Notices: Supported 00:30:21.881 Controller Attributes 00:30:21.881 128-bit Host Identifier: Not Supported 00:30:21.881 Non-Operational Permissive Mode: Not Supported 00:30:21.881 NVM Sets: Not Supported 00:30:21.881 Read Recovery Levels: Not Supported 00:30:21.881 Endurance Groups: Not Supported 00:30:21.881 Predictable Latency Mode: Not Supported 00:30:21.881 Traffic Based Keep ALive: Not Supported 00:30:21.881 Namespace Granularity: Not Supported 00:30:21.881 SQ Associations: Not Supported 00:30:21.881 UUID List: Not Supported 00:30:21.881 Multi-Domain Subsystem: Not Supported 00:30:21.881 Fixed Capacity Management: Not Supported 00:30:21.881 Variable Capacity Management: Not Supported 00:30:21.881 Delete Endurance Group: Not Supported 00:30:21.881 Delete NVM Set: Not Supported 00:30:21.881 Extended LBA Formats Supported: Not Supported 00:30:21.881 Flexible Data Placement Supported: Not Supported 00:30:21.881 00:30:21.881 Controller Memory Buffer Support 00:30:21.881 ================================ 00:30:21.881 Supported: No 00:30:21.881 00:30:21.881 Persistent Memory Region Support 00:30:21.881 ================================ 00:30:21.881 Supported: No 00:30:21.881 00:30:21.881 Admin Command Set Attributes 00:30:21.881 ============================ 00:30:21.881 Security Send/Receive: Not Supported 00:30:21.881 Format NVM: Not Supported 00:30:21.881 Firmware Activate/Download: Not Supported 00:30:21.881 Namespace Management: Not Supported 00:30:21.881 Device Self-Test: Not Supported 00:30:21.881 Directives: Not Supported 00:30:21.881 NVMe-MI: Not Supported 00:30:21.881 Virtualization Management: Not Supported 00:30:21.881 Doorbell Buffer Config: Not Supported 00:30:21.881 Get LBA Status Capability: Not Supported 00:30:21.881 Command & Feature 
Lockdown Capability: Not Supported 00:30:21.881 Abort Command Limit: 1 00:30:21.881 Async Event Request Limit: 4 00:30:21.881 Number of Firmware Slots: N/A 00:30:21.881 Firmware Slot 1 Read-Only: N/A 00:30:21.881 Firmware Activation Without Reset: N/A 00:30:21.881 Multiple Update Detection Support: N/A 00:30:21.881 Firmware Update Granularity: No Information Provided 00:30:21.881 Per-Namespace SMART Log: No 00:30:21.881 Asymmetric Namespace Access Log Page: Not Supported 00:30:21.881 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:21.881 Command Effects Log Page: Not Supported 00:30:21.881 Get Log Page Extended Data: Supported 00:30:21.881 Telemetry Log Pages: Not Supported 00:30:21.881 Persistent Event Log Pages: Not Supported 00:30:21.881 Supported Log Pages Log Page: May Support 00:30:21.881 Commands Supported & Effects Log Page: Not Supported 00:30:21.881 Feature Identifiers & Effects Log Page:May Support 00:30:21.881 NVMe-MI Commands & Effects Log Page: May Support 00:30:21.881 Data Area 4 for Telemetry Log: Not Supported 00:30:21.881 Error Log Page Entries Supported: 128 00:30:21.881 Keep Alive: Not Supported 00:30:21.881 00:30:21.881 NVM Command Set Attributes 00:30:21.881 ========================== 00:30:21.881 Submission Queue Entry Size 00:30:21.881 Max: 1 00:30:21.881 Min: 1 00:30:21.881 Completion Queue Entry Size 00:30:21.881 Max: 1 00:30:21.881 Min: 1 00:30:21.881 Number of Namespaces: 0 00:30:21.881 Compare Command: Not Supported 00:30:21.881 Write Uncorrectable Command: Not Supported 00:30:21.881 Dataset Management Command: Not Supported 00:30:21.881 Write Zeroes Command: Not Supported 00:30:21.881 Set Features Save Field: Not Supported 00:30:21.881 Reservations: Not Supported 00:30:21.881 Timestamp: Not Supported 00:30:21.881 Copy: Not Supported 00:30:21.881 Volatile Write Cache: Not Present 00:30:21.881 Atomic Write Unit (Normal): 1 00:30:21.881 Atomic Write Unit (PFail): 1 00:30:21.881 Atomic Compare & Write Unit: 1 00:30:21.881 Fused Compare & Write: Supported 00:30:21.881 Scatter-Gather List 00:30:21.881 SGL Command Set: Supported 00:30:21.881 SGL Keyed: Supported 00:30:21.881 SGL Bit Bucket Descriptor: Not Supported 00:30:21.881 SGL Metadata Pointer: Not Supported 00:30:21.881 Oversized SGL: Not Supported 00:30:21.881 SGL Metadata Address: Not Supported 00:30:21.881 SGL Offset: Supported 00:30:21.881 Transport SGL Data Block: Not Supported 00:30:21.881 Replay Protected Memory Block: Not Supported 00:30:21.881 00:30:21.881 Firmware Slot Information 00:30:21.881 ========================= 00:30:21.881 Active slot: 0 00:30:21.881 00:30:21.881 00:30:21.881 Error Log 00:30:21.881 ========= 00:30:21.881 00:30:21.881 Active Namespaces 00:30:21.881 ================= 00:30:21.881 Discovery Log Page 00:30:21.881 ================== 00:30:21.881 Generation Counter: 2 00:30:21.881 Number of Records: 2 00:30:21.881 Record Format: 0 00:30:21.881 00:30:21.881 Discovery Log Entry 0 00:30:21.881 ---------------------- 00:30:21.881 Transport Type: 3 (TCP) 00:30:21.881 Address Family: 1 (IPv4) 00:30:21.881 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:21.881 Entry Flags: 00:30:21.881 Duplicate Returned Information: 1 00:30:21.881 Explicit Persistent Connection Support for Discovery: 1 00:30:21.881 Transport Requirements: 00:30:21.881 Secure Channel: Not Required 00:30:21.881 Port ID: 0 (0x0000) 00:30:21.881 Controller ID: 65535 (0xffff) 00:30:21.881 Admin Max SQ Size: 128 00:30:21.881 Transport Service Identifier: 4420 00:30:21.881 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:30:21.881 Transport Address: 10.0.0.2 00:30:21.881 Discovery Log Entry 1 00:30:21.881 ---------------------- 00:30:21.881 Transport Type: 3 (TCP) 00:30:21.881 Address Family: 1 (IPv4) 00:30:21.881 Subsystem Type: 2 (NVM Subsystem) 00:30:21.881 Entry Flags: 00:30:21.881 Duplicate Returned Information: 0 00:30:21.881 Explicit Persistent Connection Support for Discovery: 0 00:30:21.881 Transport Requirements: 00:30:21.881 Secure Channel: Not Required 00:30:21.881 Port ID: 0 (0x0000) 00:30:21.881 Controller ID: 65535 (0xffff) 00:30:21.881 Admin Max SQ Size: 128 00:30:21.881 Transport Service Identifier: 4420 00:30:21.881 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:21.881 Transport Address: 10.0.0.2 [2024-10-13 20:00:11.634809] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:21.881 [2024-10-13 20:00:11.634844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:21.881 [2024-10-13 20:00:11.634869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.881 [2024-10-13 20:00:11.634886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:21.881 [2024-10-13 20:00:11.634900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.881 [2024-10-13 20:00:11.634913] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:21.881 [2024-10-13 20:00:11.634927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.881 [2024-10-13 20:00:11.634940] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.881 [2024-10-13 20:00:11.634953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.881 [2024-10-13 20:00:11.634991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.881 [2024-10-13 20:00:11.635044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.881 [2024-10-13 20:00:11.635085] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.881 [2024-10-13 20:00:11.635226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.881 [2024-10-13 20:00:11.635257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.881 [2024-10-13 20:00:11.635271] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635286] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.881 [2024-10-13 20:00:11.635312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.881 [2024-10-13 
20:00:11.635340] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.881 [2024-10-13 20:00:11.635366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.881 [2024-10-13 20:00:11.635419] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.881 [2024-10-13 20:00:11.635584] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.881 [2024-10-13 20:00:11.635606] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.881 [2024-10-13 20:00:11.635619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.881 [2024-10-13 20:00:11.635653] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:21.881 [2024-10-13 20:00:11.635674] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:21.881 [2024-10-13 20:00:11.635702] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.881 [2024-10-13 20:00:11.635731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.635751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.635785] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.635907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.635946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.635959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.635971] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.636001] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636017] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636028] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.636047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.636079] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.636185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.636206] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.636218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636230] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.636257] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.636303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.636335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.636451] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.636473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.636486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636497] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.636525] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636541] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.636575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.636608] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.636724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.636756] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.636770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636781] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.636810] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.636836] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.636855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.636886] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.636988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.637008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.637020] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.637032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.637059] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.637074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:30:21.882 [2024-10-13 20:00:11.637086] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.637104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.637136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.637240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.637261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.637274] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.637285] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.637313] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.637328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.637339] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.637358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.637391] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.641436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.641455] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.641466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.641477] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.641521] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.641537] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.641549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:21.882 [2024-10-13 20:00:11.641578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.882 [2024-10-13 20:00:11.641614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:21.882 [2024-10-13 20:00:11.641740] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.882 [2024-10-13 20:00:11.641761] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.882 [2024-10-13 20:00:11.641789] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.882 [2024-10-13 20:00:11.641800] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:21.882 [2024-10-13 20:00:11.641824] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:30:21.882 00:30:22.144 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 
' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:22.144 [2024-10-13 20:00:11.753911] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:30:22.144 [2024-10-13 20:00:11.754027] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090923 ] 00:30:22.144 [2024-10-13 20:00:11.820840] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:22.144 [2024-10-13 20:00:11.820966] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:22.144 [2024-10-13 20:00:11.820987] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:22.144 [2024-10-13 20:00:11.821021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:22.144 [2024-10-13 20:00:11.821049] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:22.144 [2024-10-13 20:00:11.821964] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:22.144 [2024-10-13 20:00:11.822059] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:22.144 [2024-10-13 20:00:11.846428] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:22.144 [2024-10-13 20:00:11.846465] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:22.144 [2024-10-13 20:00:11.846482] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:22.144 [2024-10-13 20:00:11.846493] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:22.144 [2024-10-13 20:00:11.846578] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.846601] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.846615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.144 [2024-10-13 20:00:11.846657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:22.144 [2024-10-13 20:00:11.846714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.144 [2024-10-13 20:00:11.854428] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.144 [2024-10-13 20:00:11.854465] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.144 [2024-10-13 20:00:11.854480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.854494] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.144 [2024-10-13 20:00:11.854530] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:22.144 [2024-10-13 20:00:11.854565] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:22.144 [2024-10-13 20:00:11.854583] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:22.144 [2024-10-13 20:00:11.854618] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.854633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.854650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.144 [2024-10-13 20:00:11.854672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.144 [2024-10-13 20:00:11.854709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.144 [2024-10-13 20:00:11.854925] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.144 [2024-10-13 20:00:11.854947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.144 [2024-10-13 20:00:11.854960] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.854972] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.144 [2024-10-13 20:00:11.854996] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:22.144 [2024-10-13 20:00:11.855021] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:22.144 [2024-10-13 20:00:11.855043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.144 [2024-10-13 20:00:11.855096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.144 [2024-10-13 20:00:11.855131] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.144 [2024-10-13 20:00:11.855272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.144 [2024-10-13 20:00:11.855298] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.144 [2024-10-13 20:00:11.855310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855322] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.144 [2024-10-13 20:00:11.855339] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:22.144 [2024-10-13 20:00:11.855362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:22.144 [2024-10-13 20:00:11.855382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855404] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855417] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.144 [2024-10-13 20:00:11.855436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.144 [2024-10-13 20:00:11.855476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 
0, qid 0 00:30:22.144 [2024-10-13 20:00:11.855674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.144 [2024-10-13 20:00:11.855696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.144 [2024-10-13 20:00:11.855708] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855719] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.144 [2024-10-13 20:00:11.855735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:22.144 [2024-10-13 20:00:11.855770] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855787] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.855799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.144 [2024-10-13 20:00:11.855818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.144 [2024-10-13 20:00:11.855850] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.144 [2024-10-13 20:00:11.855990] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.144 [2024-10-13 20:00:11.856011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.144 [2024-10-13 20:00:11.856022] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.856033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.144 [2024-10-13 20:00:11.856057] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:22.144 [2024-10-13 20:00:11.856074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:22.144 [2024-10-13 20:00:11.856097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:22.144 [2024-10-13 20:00:11.856215] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:22.144 [2024-10-13 20:00:11.856235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:22.144 [2024-10-13 20:00:11.856275] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.856290] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.144 [2024-10-13 20:00:11.856301] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.144 [2024-10-13 20:00:11.856319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.144 [2024-10-13 20:00:11.856351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.145 [2024-10-13 20:00:11.856557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.145 [2024-10-13 20:00:11.856585] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:30:22.145 [2024-10-13 20:00:11.856598] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.856609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.145 [2024-10-13 20:00:11.856623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:22.145 [2024-10-13 20:00:11.856651] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.856666] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.856684] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.856703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.145 [2024-10-13 20:00:11.856736] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.145 [2024-10-13 20:00:11.856888] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.145 [2024-10-13 20:00:11.856908] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.145 [2024-10-13 20:00:11.856920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.856930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.145 [2024-10-13 20:00:11.856952] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:22.145 [2024-10-13 20:00:11.856973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.856997] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:22.145 [2024-10-13 20:00:11.857018] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.857050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.857066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.857086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.145 [2024-10-13 20:00:11.857119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.145 [2024-10-13 20:00:11.857351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.145 [2024-10-13 20:00:11.857373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.145 [2024-10-13 20:00:11.857385] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.857411] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:22.145 [2024-10-13 20:00:11.857426] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): 
expected_datao=0, payload_size=4096 00:30:22.145 [2024-10-13 20:00:11.857440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.857471] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.857489] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.897575] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.145 [2024-10-13 20:00:11.897605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.145 [2024-10-13 20:00:11.897618] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.897631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.145 [2024-10-13 20:00:11.897658] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:22.145 [2024-10-13 20:00:11.897675] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:22.145 [2024-10-13 20:00:11.897689] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:22.145 [2024-10-13 20:00:11.897702] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:22.145 [2024-10-13 20:00:11.897716] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:22.145 [2024-10-13 20:00:11.897739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.897770] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.897794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.897809] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.897820] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.897860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:22.145 [2024-10-13 20:00:11.897901] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.145 [2024-10-13 20:00:11.898057] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.145 [2024-10-13 20:00:11.898078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.145 [2024-10-13 20:00:11.898090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.145 [2024-10-13 20:00:11.898123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898149] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.898174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:22.145 [2024-10-13 20:00:11.898193] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.898231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:22.145 [2024-10-13 20:00:11.898247] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898259] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.898285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:22.145 [2024-10-13 20:00:11.898301] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898312] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.898337] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.898353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:22.145 [2024-10-13 20:00:11.898367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.902421] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.902449] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.902462] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.902482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.145 [2024-10-13 20:00:11.902517] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:22.145 [2024-10-13 20:00:11.902552] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:22.145 [2024-10-13 20:00:11.902565] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:22.145 [2024-10-13 20:00:11.902577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.145 [2024-10-13 20:00:11.902589] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.145 [2024-10-13 20:00:11.902761] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.145 [2024-10-13 20:00:11.902782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.145 [2024-10-13 20:00:11.902794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.902810] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.145 [2024-10-13 20:00:11.902830] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:22.145 [2024-10-13 20:00:11.902847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.902870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.902890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.902915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.902928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.902940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.902960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:22.145 [2024-10-13 20:00:11.902993] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.145 [2024-10-13 20:00:11.903134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.145 [2024-10-13 20:00:11.903156] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.145 [2024-10-13 20:00:11.903168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.903179] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.145 [2024-10-13 20:00:11.903276] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.903319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:22.145 [2024-10-13 20:00:11.903350] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.903365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.145 [2024-10-13 20:00:11.903384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.145 [2024-10-13 20:00:11.903440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.145 [2024-10-13 20:00:11.903654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.145 [2024-10-13 20:00:11.903677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.145 [2024-10-13 20:00:11.903689] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.145 [2024-10-13 20:00:11.903700] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:22.146 [2024-10-13 20:00:11.903712] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, 
payload_size=4096 00:30:22.146 [2024-10-13 20:00:11.903724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.903747] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.903762] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.903781] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.903798] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.903809] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.903820] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.903869] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:22.146 [2024-10-13 20:00:11.903909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.903950] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.903994] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.904027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.904060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.146 [2024-10-13 20:00:11.904268] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.146 [2024-10-13 20:00:11.904289] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.146 [2024-10-13 20:00:11.904301] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904311] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:22.146 [2024-10-13 20:00:11.904324] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:22.146 [2024-10-13 20:00:11.904335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904352] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904365] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904383] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.904406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.904419] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904430] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.904473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:22.146 [2024-10-13 
20:00:11.904506] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.904533] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.904592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.904626] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.146 [2024-10-13 20:00:11.904808] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.146 [2024-10-13 20:00:11.904830] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.146 [2024-10-13 20:00:11.904855] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904867] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:22.146 [2024-10-13 20:00:11.904879] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:22.146 [2024-10-13 20:00:11.904890] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904908] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904921] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904939] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.904961] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.904973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.904984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.905012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905142] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:22.146 [2024-10-13 20:00:11.905155] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:22.146 [2024-10-13 20:00:11.905168] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:22.146 [2024-10-13 20:00:11.905222] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.905243] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.905262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.905287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.905316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.905327] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.905344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:22.146 [2024-10-13 20:00:11.905376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.146 [2024-10-13 20:00:11.905418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:22.146 [2024-10-13 20:00:11.905632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.905655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.905668] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.905680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.905706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.905723] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.905734] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.905744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.905770] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.905786] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.905804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.905841] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:22.146 [2024-10-13 20:00:11.905976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.905996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.906008] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.906019] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.906045] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.906061] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.906085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.906118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:22.146 [2024-10-13 20:00:11.906220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.906241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.906253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.906263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.906289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.906305] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.906324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.906354] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:22.146 [2024-10-13 20:00:11.910416] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.146 [2024-10-13 20:00:11.910440] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.146 [2024-10-13 20:00:11.910451] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.910461] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:22.146 [2024-10-13 20:00:11.910520] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.910539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.910559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.910581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.910595] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:22.146 [2024-10-13 20:00:11.910613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.146 [2024-10-13 20:00:11.910633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.146 [2024-10-13 20:00:11.910647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:22.147 [2024-10-13 20:00:11.910665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.147 [2024-10-13 20:00:11.910694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.910709] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:22.147 [2024-10-13 20:00:11.910735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.147 [2024-10-13 20:00:11.910774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:22.147 [2024-10-13 20:00:11.910810] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:22.147 [2024-10-13 20:00:11.910824] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:22.147 [2024-10-13 20:00:11.910835] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:22.147 [2024-10-13 20:00:11.911121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.147 [2024-10-13 20:00:11.911142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.147 [2024-10-13 20:00:11.911155] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911166] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:22.147 [2024-10-13 20:00:11.911179] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:22.147 [2024-10-13 20:00:11.911192] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911223] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911239] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.147 [2024-10-13 20:00:11.911277] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.147 [2024-10-13 20:00:11.911289] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911299] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:22.147 [2024-10-13 20:00:11.911311] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:22.147 [2024-10-13 20:00:11.911322] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911349] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911363] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911377] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.147 [2024-10-13 20:00:11.911391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.147 [2024-10-13 20:00:11.911411] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:22.147 [2024-10-13 20:00:11.911433] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 
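The *DEBUG* entries above come from SPDK's host-side TCP initiator (nvme_tcp.c, nvme_ctrlr.c, nvme_qpair.c) and trace the standard NVMe-oF controller bring-up: FABRIC PROPERTY GET/SET to toggle CC.EN and poll CSTS.RDY, IDENTIFY of the controller, async event (AER) configuration, keep-alive setup, SET/GET FEATURES for the number of queues, namespace identification, and GET LOG PAGE for the supported log pages. For orientation, roughly the same admin sequence can be driven against this target with the Linux kernel initiator and nvme-cli; the sketch below is illustrative only — the test itself uses SPDK's userspace initiator, the /dev/nvme0 node name is an assumption, and the address and subsystem NQN are taken from the controller summary printed just below.

# Illustrative sketch, not part of this test run: reproduce the admin bring-up with nvme-cli.
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0              # IDENTIFY controller, as traced above; /dev/nvme0 is assumed
nvme id-ns /dev/nvme0n1              # IDENTIFY namespace
nvme get-feature /dev/nvme0 -f 0x07  # Number of Queues, matching the SET/GET FEATURES entries
nvme disconnect -n nqn.2016-06.io.spdk:cnode1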
00:30:22.147 [2024-10-13 20:00:11.911444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911460] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911472] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911501] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:22.147 [2024-10-13 20:00:11.911516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:22.147 [2024-10-13 20:00:11.911527] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911536] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:22.147 [2024-10-13 20:00:11.911548] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:22.147 [2024-10-13 20:00:11.911558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911574] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911586] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911603] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.147 [2024-10-13 20:00:11.911618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.147 [2024-10-13 20:00:11.911629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:22.147 [2024-10-13 20:00:11.911681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.147 [2024-10-13 20:00:11.911699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.147 [2024-10-13 20:00:11.911710] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911721] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:22.147 [2024-10-13 20:00:11.911747] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.147 [2024-10-13 20:00:11.911765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.147 [2024-10-13 20:00:11.911776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911786] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:22.147 [2024-10-13 20:00:11.911805] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.147 [2024-10-13 20:00:11.911821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.147 [2024-10-13 20:00:11.911831] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.147 [2024-10-13 20:00:11.911841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:22.147 ===================================================== 00:30:22.147 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.147 ===================================================== 00:30:22.147 Controller Capabilities/Features 00:30:22.147 ================================ 00:30:22.147 Vendor ID: 8086 00:30:22.147 Subsystem Vendor ID: 
8086 00:30:22.147 Serial Number: SPDK00000000000001 00:30:22.147 Model Number: SPDK bdev Controller 00:30:22.147 Firmware Version: 25.01 00:30:22.147 Recommended Arb Burst: 6 00:30:22.147 IEEE OUI Identifier: e4 d2 5c 00:30:22.147 Multi-path I/O 00:30:22.147 May have multiple subsystem ports: Yes 00:30:22.147 May have multiple controllers: Yes 00:30:22.147 Associated with SR-IOV VF: No 00:30:22.147 Max Data Transfer Size: 131072 00:30:22.147 Max Number of Namespaces: 32 00:30:22.147 Max Number of I/O Queues: 127 00:30:22.147 NVMe Specification Version (VS): 1.3 00:30:22.147 NVMe Specification Version (Identify): 1.3 00:30:22.147 Maximum Queue Entries: 128 00:30:22.147 Contiguous Queues Required: Yes 00:30:22.147 Arbitration Mechanisms Supported 00:30:22.147 Weighted Round Robin: Not Supported 00:30:22.147 Vendor Specific: Not Supported 00:30:22.147 Reset Timeout: 15000 ms 00:30:22.147 Doorbell Stride: 4 bytes 00:30:22.147 NVM Subsystem Reset: Not Supported 00:30:22.147 Command Sets Supported 00:30:22.147 NVM Command Set: Supported 00:30:22.147 Boot Partition: Not Supported 00:30:22.147 Memory Page Size Minimum: 4096 bytes 00:30:22.147 Memory Page Size Maximum: 4096 bytes 00:30:22.147 Persistent Memory Region: Not Supported 00:30:22.147 Optional Asynchronous Events Supported 00:30:22.147 Namespace Attribute Notices: Supported 00:30:22.147 Firmware Activation Notices: Not Supported 00:30:22.147 ANA Change Notices: Not Supported 00:30:22.147 PLE Aggregate Log Change Notices: Not Supported 00:30:22.147 LBA Status Info Alert Notices: Not Supported 00:30:22.147 EGE Aggregate Log Change Notices: Not Supported 00:30:22.147 Normal NVM Subsystem Shutdown event: Not Supported 00:30:22.147 Zone Descriptor Change Notices: Not Supported 00:30:22.147 Discovery Log Change Notices: Not Supported 00:30:22.147 Controller Attributes 00:30:22.147 128-bit Host Identifier: Supported 00:30:22.147 Non-Operational Permissive Mode: Not Supported 00:30:22.147 NVM Sets: Not Supported 00:30:22.147 Read Recovery Levels: Not Supported 00:30:22.147 Endurance Groups: Not Supported 00:30:22.147 Predictable Latency Mode: Not Supported 00:30:22.147 Traffic Based Keep ALive: Not Supported 00:30:22.147 Namespace Granularity: Not Supported 00:30:22.147 SQ Associations: Not Supported 00:30:22.147 UUID List: Not Supported 00:30:22.147 Multi-Domain Subsystem: Not Supported 00:30:22.147 Fixed Capacity Management: Not Supported 00:30:22.147 Variable Capacity Management: Not Supported 00:30:22.147 Delete Endurance Group: Not Supported 00:30:22.147 Delete NVM Set: Not Supported 00:30:22.147 Extended LBA Formats Supported: Not Supported 00:30:22.147 Flexible Data Placement Supported: Not Supported 00:30:22.147 00:30:22.147 Controller Memory Buffer Support 00:30:22.147 ================================ 00:30:22.147 Supported: No 00:30:22.147 00:30:22.147 Persistent Memory Region Support 00:30:22.147 ================================ 00:30:22.147 Supported: No 00:30:22.147 00:30:22.147 Admin Command Set Attributes 00:30:22.147 ============================ 00:30:22.147 Security Send/Receive: Not Supported 00:30:22.147 Format NVM: Not Supported 00:30:22.147 Firmware Activate/Download: Not Supported 00:30:22.147 Namespace Management: Not Supported 00:30:22.147 Device Self-Test: Not Supported 00:30:22.147 Directives: Not Supported 00:30:22.147 NVMe-MI: Not Supported 00:30:22.147 Virtualization Management: Not Supported 00:30:22.147 Doorbell Buffer Config: Not Supported 00:30:22.147 Get LBA Status Capability: Not Supported 00:30:22.147 Command & 
Feature Lockdown Capability: Not Supported 00:30:22.147 Abort Command Limit: 4 00:30:22.147 Async Event Request Limit: 4 00:30:22.147 Number of Firmware Slots: N/A 00:30:22.147 Firmware Slot 1 Read-Only: N/A 00:30:22.147 Firmware Activation Without Reset: N/A 00:30:22.147 Multiple Update Detection Support: N/A 00:30:22.147 Firmware Update Granularity: No Information Provided 00:30:22.147 Per-Namespace SMART Log: No 00:30:22.147 Asymmetric Namespace Access Log Page: Not Supported 00:30:22.147 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:22.147 Command Effects Log Page: Supported 00:30:22.147 Get Log Page Extended Data: Supported 00:30:22.148 Telemetry Log Pages: Not Supported 00:30:22.148 Persistent Event Log Pages: Not Supported 00:30:22.148 Supported Log Pages Log Page: May Support 00:30:22.148 Commands Supported & Effects Log Page: Not Supported 00:30:22.148 Feature Identifiers & Effects Log Page:May Support 00:30:22.148 NVMe-MI Commands & Effects Log Page: May Support 00:30:22.148 Data Area 4 for Telemetry Log: Not Supported 00:30:22.148 Error Log Page Entries Supported: 128 00:30:22.148 Keep Alive: Supported 00:30:22.148 Keep Alive Granularity: 10000 ms 00:30:22.148 00:30:22.148 NVM Command Set Attributes 00:30:22.148 ========================== 00:30:22.148 Submission Queue Entry Size 00:30:22.148 Max: 64 00:30:22.148 Min: 64 00:30:22.148 Completion Queue Entry Size 00:30:22.148 Max: 16 00:30:22.148 Min: 16 00:30:22.148 Number of Namespaces: 32 00:30:22.148 Compare Command: Supported 00:30:22.148 Write Uncorrectable Command: Not Supported 00:30:22.148 Dataset Management Command: Supported 00:30:22.148 Write Zeroes Command: Supported 00:30:22.148 Set Features Save Field: Not Supported 00:30:22.148 Reservations: Supported 00:30:22.148 Timestamp: Not Supported 00:30:22.148 Copy: Supported 00:30:22.148 Volatile Write Cache: Present 00:30:22.148 Atomic Write Unit (Normal): 1 00:30:22.148 Atomic Write Unit (PFail): 1 00:30:22.148 Atomic Compare & Write Unit: 1 00:30:22.148 Fused Compare & Write: Supported 00:30:22.148 Scatter-Gather List 00:30:22.148 SGL Command Set: Supported 00:30:22.148 SGL Keyed: Supported 00:30:22.148 SGL Bit Bucket Descriptor: Not Supported 00:30:22.148 SGL Metadata Pointer: Not Supported 00:30:22.148 Oversized SGL: Not Supported 00:30:22.148 SGL Metadata Address: Not Supported 00:30:22.148 SGL Offset: Supported 00:30:22.148 Transport SGL Data Block: Not Supported 00:30:22.148 Replay Protected Memory Block: Not Supported 00:30:22.148 00:30:22.148 Firmware Slot Information 00:30:22.148 ========================= 00:30:22.148 Active slot: 1 00:30:22.148 Slot 1 Firmware Revision: 25.01 00:30:22.148 00:30:22.148 00:30:22.148 Commands Supported and Effects 00:30:22.148 ============================== 00:30:22.148 Admin Commands 00:30:22.148 -------------- 00:30:22.148 Get Log Page (02h): Supported 00:30:22.148 Identify (06h): Supported 00:30:22.148 Abort (08h): Supported 00:30:22.148 Set Features (09h): Supported 00:30:22.148 Get Features (0Ah): Supported 00:30:22.148 Asynchronous Event Request (0Ch): Supported 00:30:22.148 Keep Alive (18h): Supported 00:30:22.148 I/O Commands 00:30:22.148 ------------ 00:30:22.148 Flush (00h): Supported LBA-Change 00:30:22.148 Write (01h): Supported LBA-Change 00:30:22.148 Read (02h): Supported 00:30:22.148 Compare (05h): Supported 00:30:22.148 Write Zeroes (08h): Supported LBA-Change 00:30:22.148 Dataset Management (09h): Supported LBA-Change 00:30:22.148 Copy (19h): Supported LBA-Change 00:30:22.148 00:30:22.148 Error Log 00:30:22.148 
========= 00:30:22.148 00:30:22.148 Arbitration 00:30:22.148 =========== 00:30:22.148 Arbitration Burst: 1 00:30:22.148 00:30:22.148 Power Management 00:30:22.148 ================ 00:30:22.148 Number of Power States: 1 00:30:22.148 Current Power State: Power State #0 00:30:22.148 Power State #0: 00:30:22.148 Max Power: 0.00 W 00:30:22.148 Non-Operational State: Operational 00:30:22.148 Entry Latency: Not Reported 00:30:22.148 Exit Latency: Not Reported 00:30:22.148 Relative Read Throughput: 0 00:30:22.148 Relative Read Latency: 0 00:30:22.148 Relative Write Throughput: 0 00:30:22.148 Relative Write Latency: 0 00:30:22.148 Idle Power: Not Reported 00:30:22.148 Active Power: Not Reported 00:30:22.148 Non-Operational Permissive Mode: Not Supported 00:30:22.148 00:30:22.148 Health Information 00:30:22.148 ================== 00:30:22.148 Critical Warnings: 00:30:22.148 Available Spare Space: OK 00:30:22.148 Temperature: OK 00:30:22.148 Device Reliability: OK 00:30:22.148 Read Only: No 00:30:22.148 Volatile Memory Backup: OK 00:30:22.148 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:22.148 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:22.148 Available Spare: 0% 00:30:22.148 Available Spare Threshold: 0% 00:30:22.148 Life Percentage Used:[2024-10-13 20:00:11.912041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:22.148 [2024-10-13 20:00:11.912080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.148 [2024-10-13 20:00:11.912129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:22.148 [2024-10-13 20:00:11.912275] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.148 [2024-10-13 20:00:11.912297] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.148 [2024-10-13 20:00:11.912309] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912321] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:22.148 [2024-10-13 20:00:11.912408] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:22.148 [2024-10-13 20:00:11.912443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:22.148 [2024-10-13 20:00:11.912465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.148 [2024-10-13 20:00:11.912488] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:22.148 [2024-10-13 20:00:11.912502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.148 [2024-10-13 20:00:11.912515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:22.148 [2024-10-13 20:00:11.912528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.148 [2024-10-13 20:00:11.912540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 
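The human-readable sections interleaved with the debug entries ("Controller Capabilities/Features", "Admin Command Set Attributes", "Power Management", "Health Information", and later the namespace details) are the normal output of SPDK's identify example application, which host/identify.sh points at this subsystem; the *DEBUG* lines appear alongside it because driver debug logging is enabled for this run. A minimal sketch of invoking it by hand is shown below — the binary path and the log-flag option are assumptions that vary between SPDK versions and build layouts, while the transport-ID string format and the target coordinates match this run.

# Sketch only; the binary path and -L flag are assumptions, the -r transport-ID string is SPDK's standard key:value format.
./build/examples/identify \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L nvme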
00:30:22.148 [2024-10-13 20:00:11.912567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.148 [2024-10-13 20:00:11.912589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912604] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.148 [2024-10-13 20:00:11.912639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.148 [2024-10-13 20:00:11.912675] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.148 [2024-10-13 20:00:11.912828] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.148 [2024-10-13 20:00:11.912850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.148 [2024-10-13 20:00:11.912862] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.148 [2024-10-13 20:00:11.912901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912917] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.912929] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.148 [2024-10-13 20:00:11.912948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.148 [2024-10-13 20:00:11.912988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.148 [2024-10-13 20:00:11.913160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.148 [2024-10-13 20:00:11.913181] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.148 [2024-10-13 20:00:11.913192] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.148 [2024-10-13 20:00:11.913204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.148 [2024-10-13 20:00:11.913220] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:22.148 [2024-10-13 20:00:11.913233] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:22.148 [2024-10-13 20:00:11.913259] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913275] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913294] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.149 [2024-10-13 20:00:11.913313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.149 [2024-10-13 20:00:11.913345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.149 [2024-10-13 20:00:11.913480] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.149 [2024-10-13 20:00:11.913501] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.149 [2024-10-13 20:00:11.913512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913523] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.149 [2024-10-13 20:00:11.913552] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913579] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.149 [2024-10-13 20:00:11.913596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.149 [2024-10-13 20:00:11.913627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.149 [2024-10-13 20:00:11.913728] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.149 [2024-10-13 20:00:11.913748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.149 [2024-10-13 20:00:11.913760] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.149 [2024-10-13 20:00:11.913802] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913818] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.913829] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.149 [2024-10-13 20:00:11.913852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.149 [2024-10-13 20:00:11.913883] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.149 [2024-10-13 20:00:11.913988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.149 [2024-10-13 20:00:11.914009] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.149 [2024-10-13 20:00:11.914021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.914032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.149 [2024-10-13 20:00:11.914059] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.914074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.914085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.149 [2024-10-13 20:00:11.914102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.149 [2024-10-13 20:00:11.914133] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.149 [2024-10-13 20:00:11.914240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.149 [2024-10-13 20:00:11.914261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.149 [2024-10-13 20:00:11.914273] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.914284] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.149 [2024-10-13 20:00:11.914311] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.914327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.914337] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.149 [2024-10-13 20:00:11.914355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.149 [2024-10-13 20:00:11.914385] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.149 [2024-10-13 20:00:11.918439] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.149 [2024-10-13 20:00:11.918459] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.149 [2024-10-13 20:00:11.918471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.918481] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.149 [2024-10-13 20:00:11.918509] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.918524] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.918535] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:22.149 [2024-10-13 20:00:11.918553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.149 [2024-10-13 20:00:11.918583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:22.149 [2024-10-13 20:00:11.918729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:22.149 [2024-10-13 20:00:11.918749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:22.149 [2024-10-13 20:00:11.918761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:22.149 [2024-10-13 20:00:11.918771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:22.149 [2024-10-13 20:00:11.918800] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:30:22.407 0% 00:30:22.407 Data Units Read: 0 00:30:22.407 Data Units Written: 0 00:30:22.407 Host Read Commands: 0 00:30:22.407 Host Write Commands: 0 00:30:22.407 Controller Busy Time: 0 minutes 00:30:22.407 Power Cycles: 0 00:30:22.407 Power On Hours: 0 hours 00:30:22.407 Unsafe Shutdowns: 0 00:30:22.407 Unrecoverable Media Errors: 0 00:30:22.407 Lifetime Error Log Entries: 0 00:30:22.407 Warning Temperature Time: 0 minutes 00:30:22.407 Critical Temperature Time: 0 minutes 00:30:22.407 00:30:22.407 Number of Queues 00:30:22.407 ================ 00:30:22.407 Number of I/O Submission Queues: 127 00:30:22.407 Number of I/O Completion Queues: 127 00:30:22.407 00:30:22.407 Active Namespaces 00:30:22.407 ================= 00:30:22.407 Namespace ID:1 00:30:22.407 Error Recovery Timeout: Unlimited 00:30:22.408 Command Set Identifier: NVM (00h) 00:30:22.408 Deallocate: Supported 00:30:22.408 
Deallocated/Unwritten Error: Not Supported 00:30:22.408 Deallocated Read Value: Unknown 00:30:22.408 Deallocate in Write Zeroes: Not Supported 00:30:22.408 Deallocated Guard Field: 0xFFFF 00:30:22.408 Flush: Supported 00:30:22.408 Reservation: Supported 00:30:22.408 Namespace Sharing Capabilities: Multiple Controllers 00:30:22.408 Size (in LBAs): 131072 (0GiB) 00:30:22.408 Capacity (in LBAs): 131072 (0GiB) 00:30:22.408 Utilization (in LBAs): 131072 (0GiB) 00:30:22.408 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:22.408 EUI64: ABCDEF0123456789 00:30:22.408 UUID: 36772bc5-6686-449d-9bb3-e4a5842b9a87 00:30:22.408 Thin Provisioning: Not Supported 00:30:22.408 Per-NS Atomic Units: Yes 00:30:22.408 Atomic Boundary Size (Normal): 0 00:30:22.408 Atomic Boundary Size (PFail): 0 00:30:22.408 Atomic Boundary Offset: 0 00:30:22.408 Maximum Single Source Range Length: 65535 00:30:22.408 Maximum Copy Length: 65535 00:30:22.408 Maximum Source Range Count: 1 00:30:22.408 NGUID/EUI64 Never Reused: No 00:30:22.408 Namespace Write Protected: No 00:30:22.408 Number of LBA Formats: 1 00:30:22.408 Current LBA Format: LBA Format #00 00:30:22.408 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:22.408 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:22.408 20:00:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:22.408 rmmod nvme_tcp 00:30:22.408 rmmod nvme_fabrics 00:30:22.408 rmmod nvme_keyring 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3090641 ']' 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3090641 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3090641 ']' 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3090641 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
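At this point the identify pass is done and the script starts tearing the fixture down: the trace above shows the subsystem being removed over JSON-RPC (nvmf_delete_subsystem), nvmfcleanup unloading the kernel nvme-tcp/nvme-fabrics/nvme-keyring modules, and killprocess beginning to stop the target application (pid 3090641). A hedged sketch of performing the same cleanup by hand follows; the rpc.py location and the RPC socket path are SPDK defaults and therefore assumptions for this particular run.

# Manual equivalent of the traced teardown (sketch; paths and socket assumed to be SPDK defaults).
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp       # same unload nvmfcleanup performs above
modprobe -v -r nvme-fabrics
kill 3090641                  # stop the nvmf target process (pid taken from this run)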
00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3090641 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3090641' 00:30:22.408 killing process with pid 3090641 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3090641 00:30:22.408 20:00:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3090641 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.811 20:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.792 00:30:25.792 real 0m7.448s 00:30:25.792 user 0m11.320s 00:30:25.792 sys 0m2.083s 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:25.792 ************************************ 00:30:25.792 END TEST nvmf_identify 00:30:25.792 ************************************ 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.792 ************************************ 00:30:25.792 START TEST nvmf_perf 00:30:25.792 ************************************ 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:25.792 * Looking for test storage... 
00:30:25.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.792 --rc genhtml_branch_coverage=1 00:30:25.792 --rc genhtml_function_coverage=1 00:30:25.792 --rc genhtml_legend=1 00:30:25.792 --rc geninfo_all_blocks=1 00:30:25.792 --rc geninfo_unexecuted_blocks=1 00:30:25.792 00:30:25.792 ' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.792 --rc genhtml_branch_coverage=1 00:30:25.792 --rc genhtml_function_coverage=1 00:30:25.792 --rc genhtml_legend=1 00:30:25.792 --rc geninfo_all_blocks=1 00:30:25.792 --rc geninfo_unexecuted_blocks=1 00:30:25.792 00:30:25.792 ' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.792 --rc genhtml_branch_coverage=1 00:30:25.792 --rc genhtml_function_coverage=1 00:30:25.792 --rc genhtml_legend=1 00:30:25.792 --rc geninfo_all_blocks=1 00:30:25.792 --rc geninfo_unexecuted_blocks=1 00:30:25.792 00:30:25.792 ' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:25.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.792 --rc genhtml_branch_coverage=1 00:30:25.792 --rc genhtml_function_coverage=1 00:30:25.792 --rc genhtml_legend=1 00:30:25.792 --rc geninfo_all_blocks=1 00:30:25.792 --rc geninfo_unexecuted_blocks=1 00:30:25.792 00:30:25.792 ' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.792 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:25.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.793 20:00:15 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:25.793 20:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:28.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:28.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:28.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:28.336 20:00:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:28.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:30:28.336 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.337 20:00:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:30:28.337 00:30:28.337 --- 10.0.0.2 ping statistics --- 00:30:28.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.337 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:30:28.337 00:30:28.337 --- 10.0.0.1 ping statistics --- 00:30:28.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.337 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3092997 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3092997 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3092997 ']' 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:28.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.337 20:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:28.337 [2024-10-13 20:00:17.852881] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:30:28.337 [2024-10-13 20:00:17.853019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.337 [2024-10-13 20:00:17.989824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.337 [2024-10-13 20:00:18.127339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.337 [2024-10-13 20:00:18.127438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.337 [2024-10-13 20:00:18.127465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.337 [2024-10-13 20:00:18.127489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.337 [2024-10-13 20:00:18.127508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.337 [2024-10-13 20:00:18.130383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.337 [2024-10-13 20:00:18.130456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.337 [2024-10-13 20:00:18.130481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.337 [2024-10-13 20:00:18.130489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:29.274 20:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:32.564 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:32.564 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:32.564 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:32.564 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:33.135 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
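For readability, here is a condensed sketch of the target-side setup that perf.sh performs, assembled only from the rpc.py commands traced just above and below (same rpc.py path, subsystem NQN, serial number, and 10.0.0.2:4420 listener as this run); it summarizes the trace rather than adding any step that was actually executed:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                     # Malloc0 (64 MiB, 512 B blocks)
  $rpc nvmf_create_transport -t tcp -o                               # TCP transport init
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # NSID 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # NSID 2 (local NVMe at 0000:88:00.0)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420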
00:30:33.135 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:33.135 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:33.135 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:33.135 20:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:33.394 [2024-10-13 20:00:22.980585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.394 20:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:33.652 20:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:33.652 20:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:33.910 20:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:33.910 20:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:34.169 20:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.429 [2024-10-13 20:00:24.220744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.429 20:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:34.997 20:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:34.997 20:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:34.997 20:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:34.997 20:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:36.375 Initializing NVMe Controllers 00:30:36.375 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:36.375 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:36.375 Initialization complete. Launching workers. 
00:30:36.375 ======================================================== 00:30:36.375 Latency(us) 00:30:36.375 Device Information : IOPS MiB/s Average min max 00:30:36.375 PCIE (0000:88:00.0) NSID 1 from core 0: 75296.62 294.13 424.26 24.21 4395.46 00:30:36.375 ======================================================== 00:30:36.375 Total : 75296.62 294.13 424.26 24.21 4395.46 00:30:36.375 00:30:36.375 20:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.758 Initializing NVMe Controllers 00:30:37.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:37.758 Initialization complete. Launching workers. 00:30:37.758 ======================================================== 00:30:37.758 Latency(us) 00:30:37.758 Device Information : IOPS MiB/s Average min max 00:30:37.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10074.73 199.15 45219.54 00:30:37.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14188.70 7908.61 47911.99 00:30:37.758 ======================================================== 00:30:37.758 Total : 171.00 0.67 11782.87 199.15 47911.99 00:30:37.758 00:30:38.017 20:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:39.399 Initializing NVMe Controllers 00:30:39.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:39.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:39.400 Initialization complete. Launching workers. 00:30:39.400 ======================================================== 00:30:39.400 Latency(us) 00:30:39.400 Device Information : IOPS MiB/s Average min max 00:30:39.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5477.38 21.40 5844.47 895.29 12551.73 00:30:39.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3720.32 14.53 8626.80 5507.64 22290.90 00:30:39.400 ======================================================== 00:30:39.400 Total : 9197.70 35.93 6969.88 895.29 22290.90 00:30:39.400 00:30:39.400 20:00:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:39.400 20:00:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:39.400 20:00:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:42.696 Initializing NVMe Controllers 00:30:42.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.696 Controller IO queue size 128, less than required. 00:30:42.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:42.696 Controller IO queue size 128, less than required. 00:30:42.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:42.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:42.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:42.696 Initialization complete. Launching workers. 00:30:42.696 ======================================================== 00:30:42.696 Latency(us) 00:30:42.696 Device Information : IOPS MiB/s Average min max 00:30:42.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1316.66 329.17 99956.09 62957.09 225178.31 00:30:42.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 545.15 136.29 260523.95 135467.43 509571.64 00:30:42.696 ======================================================== 00:30:42.696 Total : 1861.81 465.45 146971.53 62957.09 509571.64 00:30:42.696 00:30:42.696 20:00:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:42.696 No valid NVMe controllers or AIO or URING devices found 00:30:42.696 Initializing NVMe Controllers 00:30:42.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.696 Controller IO queue size 128, less than required. 00:30:42.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:42.696 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:42.696 Controller IO queue size 128, less than required. 00:30:42.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:42.696 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:42.696 WARNING: Some requested NVMe devices were skipped 00:30:42.696 20:00:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:45.991 Initializing NVMe Controllers 00:30:45.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.991 Controller IO queue size 128, less than required. 00:30:45.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.991 Controller IO queue size 128, less than required. 00:30:45.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:45.991 Initialization complete. Launching workers. 
00:30:45.991 00:30:45.991 ==================== 00:30:45.991 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:45.991 TCP transport: 00:30:45.991 polls: 5629 00:30:45.991 idle_polls: 3078 00:30:45.991 sock_completions: 2551 00:30:45.991 nvme_completions: 4949 00:30:45.991 submitted_requests: 7360 00:30:45.991 queued_requests: 1 00:30:45.991 00:30:45.991 ==================== 00:30:45.991 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:45.991 TCP transport: 00:30:45.991 polls: 6362 00:30:45.991 idle_polls: 3803 00:30:45.991 sock_completions: 2559 00:30:45.991 nvme_completions: 5057 00:30:45.991 submitted_requests: 7590 00:30:45.991 queued_requests: 1 00:30:45.991 ======================================================== 00:30:45.991 Latency(us) 00:30:45.991 Device Information : IOPS MiB/s Average min max 00:30:45.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1234.34 308.59 112468.50 65009.00 419102.71 00:30:45.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1261.29 315.32 102638.61 62498.18 299335.57 00:30:45.991 ======================================================== 00:30:45.991 Total : 2495.63 623.91 107500.50 62498.18 419102.71 00:30:45.991 00:30:45.991 20:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:45.991 20:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:45.991 20:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:45.991 20:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:45.991 20:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:49.284 20:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7 00:30:49.284 20:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7 00:30:49.284 20:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7 00:30:49.284 20:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:49.284 20:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:49.284 20:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:49.284 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:49.542 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:49.542 { 00:30:49.542 "uuid": "aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7", 00:30:49.542 "name": "lvs_0", 00:30:49.542 "base_bdev": "Nvme0n1", 00:30:49.542 "total_data_clusters": 238234, 00:30:49.542 "free_clusters": 238234, 00:30:49.542 "block_size": 512, 00:30:49.542 "cluster_size": 4194304 00:30:49.542 } 00:30:49.542 ]' 00:30:49.542 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7") .free_clusters' 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:49.801 20:00:39 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7") .cluster_size' 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:49.801 952936 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:49.801 20:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7 lbd_0 20480 00:30:50.368 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=72ed7683-3c47-464f-8ea5-72e65c5d0a51 00:30:50.368 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 72ed7683-3c47-464f-8ea5-72e65c5d0a51 lvs_n_0 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9d6885bd-559f-46a3-87fb-062cbeb30aeb 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9d6885bd-559f-46a3-87fb-062cbeb30aeb 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=9d6885bd-559f-46a3-87fb-062cbeb30aeb 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:51.306 20:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:51.564 { 00:30:51.564 "uuid": "aa53e086-65c5-4dc2-b7b2-f0c5f660e4f7", 00:30:51.564 "name": "lvs_0", 00:30:51.564 "base_bdev": "Nvme0n1", 00:30:51.564 "total_data_clusters": 238234, 00:30:51.564 "free_clusters": 233114, 00:30:51.564 "block_size": 512, 00:30:51.564 "cluster_size": 4194304 00:30:51.564 }, 00:30:51.564 { 00:30:51.564 "uuid": "9d6885bd-559f-46a3-87fb-062cbeb30aeb", 00:30:51.564 "name": "lvs_n_0", 00:30:51.564 "base_bdev": "72ed7683-3c47-464f-8ea5-72e65c5d0a51", 00:30:51.564 "total_data_clusters": 5114, 00:30:51.564 "free_clusters": 5114, 00:30:51.564 "block_size": 512, 00:30:51.564 "cluster_size": 4194304 00:30:51.564 } 00:30:51.564 ]' 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9d6885bd-559f-46a3-87fb-062cbeb30aeb") .free_clusters' 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9d6885bd-559f-46a3-87fb-062cbeb30aeb") .cluster_size' 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:30:51.564 20456 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:51.564 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d6885bd-559f-46a3-87fb-062cbeb30aeb lbd_nest_0 20456 00:30:51.823 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=9701a2ec-6c17-49c3-9b6d-29521b4a15ef 00:30:51.823 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.082 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:52.082 20:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9701a2ec-6c17-49c3-9b6d-29521b4a15ef 00:30:52.340 20:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.599 20:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:52.599 20:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:52.599 20:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:52.599 20:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:52.599 20:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.814 Initializing NVMe Controllers 00:31:04.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.814 Initialization complete. Launching workers. 00:31:04.814 ======================================================== 00:31:04.814 Latency(us) 00:31:04.814 Device Information : IOPS MiB/s Average min max 00:31:04.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.40 0.02 22576.77 246.89 45780.94 00:31:04.814 ======================================================== 00:31:04.814 Total : 44.40 0.02 22576.77 246.89 45780.94 00:31:04.814 00:31:04.814 20:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:04.814 20:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:14.799 Initializing NVMe Controllers 00:31:14.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:14.799 Initialization complete. Launching workers. 
00:31:14.799 ======================================================== 00:31:14.799 Latency(us) 00:31:14.799 Device Information : IOPS MiB/s Average min max 00:31:14.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.50 10.06 12429.27 5967.12 47928.86 00:31:14.799 ======================================================== 00:31:14.799 Total : 80.50 10.06 12429.27 5967.12 47928.86 00:31:14.799 00:31:14.799 20:01:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:14.799 20:01:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:14.799 20:01:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.841 Initializing NVMe Controllers 00:31:24.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.841 Initialization complete. Launching workers. 00:31:24.841 ======================================================== 00:31:24.841 Latency(us) 00:31:24.841 Device Information : IOPS MiB/s Average min max 00:31:24.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4712.40 2.30 6789.46 650.57 16242.59 00:31:24.841 ======================================================== 00:31:24.841 Total : 4712.40 2.30 6789.46 650.57 16242.59 00:31:24.841 00:31:24.841 20:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:24.841 20:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.819 Initializing NVMe Controllers 00:31:34.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.819 Initialization complete. Launching workers. 00:31:34.819 ======================================================== 00:31:34.819 Latency(us) 00:31:34.819 Device Information : IOPS MiB/s Average min max 00:31:34.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3544.28 443.04 9028.73 1147.29 20322.15 00:31:34.819 ======================================================== 00:31:34.819 Total : 3544.28 443.04 9028.73 1147.29 20322.15 00:31:34.819 00:31:34.819 20:01:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:34.819 20:01:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:34.819 20:01:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:44.790 Initializing NVMe Controllers 00:31:44.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.790 Controller IO queue size 128, less than required. 00:31:44.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:44.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:44.790 Initialization complete. Launching workers. 00:31:44.790 ======================================================== 00:31:44.790 Latency(us) 00:31:44.790 Device Information : IOPS MiB/s Average min max 00:31:44.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8237.92 4.02 15537.84 2069.51 38238.51 00:31:44.790 ======================================================== 00:31:44.790 Total : 8237.92 4.02 15537.84 2069.51 38238.51 00:31:44.790 00:31:44.790 20:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:44.790 20:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.986 Initializing NVMe Controllers 00:31:56.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.986 Controller IO queue size 128, less than required. 00:31:56.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:56.986 Initialization complete. Launching workers. 00:31:56.986 ======================================================== 00:31:56.986 Latency(us) 00:31:56.986 Device Information : IOPS MiB/s Average min max 00:31:56.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1175.32 146.91 109410.37 23723.95 222312.74 00:31:56.986 ======================================================== 00:31:56.986 Total : 1175.32 146.91 109410.37 23723.95 222312.74 00:31:56.986 00:31:56.986 20:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.986 20:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9701a2ec-6c17-49c3-9b6d-29521b4a15ef 00:31:56.986 20:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:56.986 20:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 72ed7683-3c47-464f-8ea5-72e65c5d0a51 00:31:57.244 20:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.502 rmmod nvme_tcp 
00:31:57.502 rmmod nvme_fabrics 00:31:57.502 rmmod nvme_keyring 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3092997 ']' 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3092997 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3092997 ']' 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3092997 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3092997 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3092997' 00:31:57.502 killing process with pid 3092997 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3092997 00:31:57.502 20:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3092997 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.932 00:32:01.932 real 1m36.230s 00:32:01.932 user 5m57.603s 00:32:01.932 sys 0m15.390s 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:01.932 ************************************ 00:32:01.932 END TEST nvmf_perf 00:32:01.932 ************************************ 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.932 ************************************ 00:32:01.932 START TEST nvmf_fio_host 00:32:01.932 ************************************ 00:32:01.932 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:02.191 * Looking for test storage... 00:32:02.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:02.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.191 --rc genhtml_branch_coverage=1 00:32:02.191 --rc genhtml_function_coverage=1 00:32:02.191 --rc genhtml_legend=1 00:32:02.191 --rc geninfo_all_blocks=1 00:32:02.191 --rc geninfo_unexecuted_blocks=1 00:32:02.191 00:32:02.191 ' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:02.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.191 --rc genhtml_branch_coverage=1 00:32:02.191 --rc genhtml_function_coverage=1 00:32:02.191 --rc genhtml_legend=1 00:32:02.191 --rc geninfo_all_blocks=1 00:32:02.191 --rc geninfo_unexecuted_blocks=1 00:32:02.191 00:32:02.191 ' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:02.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.191 --rc genhtml_branch_coverage=1 00:32:02.191 --rc genhtml_function_coverage=1 00:32:02.191 --rc genhtml_legend=1 00:32:02.191 --rc geninfo_all_blocks=1 00:32:02.191 --rc geninfo_unexecuted_blocks=1 00:32:02.191 00:32:02.191 ' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:02.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.191 --rc genhtml_branch_coverage=1 00:32:02.191 --rc genhtml_function_coverage=1 00:32:02.191 --rc genhtml_legend=1 00:32:02.191 --rc geninfo_all_blocks=1 00:32:02.191 --rc geninfo_unexecuted_blocks=1 00:32:02.191 00:32:02.191 ' 00:32:02.191 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.192 20:01:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:02.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.192 
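Before networking is initialized, nvmf/common.sh seeds the host identity that later connect commands reuse. A tiny sketch of that pattern; the first and third lines appear in the trace, while the parameter expansion used to derive the host ID is an assumption based on the values shown:

NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>, as seen above
NVME_HOSTID=${NVME_HOSTNQN##*:}           # uuid portion; exact derivation in the script not shown in the trace
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")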
20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.192 20:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:04.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:04.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:04.108 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:04.108 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.108 20:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:04.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:32:04.367 00:32:04.367 --- 10.0.0.2 ping statistics --- 00:32:04.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.367 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
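The nvmftestinit trace above builds a two-port loopback topology on the e810 NIC: one port (cvl_0_0) moves into a private network namespace as the target interface, the other (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of those steps, using the interface names and addresses from this rig; error handling and the iptables comment tag are omitted:

TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in; trace adds a SPDK_NVMF comment tag
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1         # target -> initiator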
00:32:04.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:32:04.367 00:32:04.367 --- 10.0.0.1 ping statistics --- 00:32:04.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.367 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3105637 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3105637 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3105637 ']' 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.367 20:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.625 [2024-10-13 20:01:54.211508] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:32:04.626 [2024-10-13 20:01:54.211661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.626 [2024-10-13 20:01:54.358944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:04.884 [2024-10-13 20:01:54.501736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.884 [2024-10-13 20:01:54.501817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.884 [2024-10-13 20:01:54.501843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.884 [2024-10-13 20:01:54.501867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.884 [2024-10-13 20:01:54.501887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.884 [2024-10-13 20:01:54.504744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.884 [2024-10-13 20:01:54.504817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:04.884 [2024-10-13 20:01:54.504918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.884 [2024-10-13 20:01:54.504923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:32:05.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:05.707 [2024-10-13 20:01:55.436223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.707 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:05.707 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:05.707 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.707 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:06.272 Malloc1 00:32:06.272 20:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:06.530 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:06.788 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.045 [2024-10-13 20:01:56.700048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.045 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
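The fio host test then stands up an nvmf target inside that namespace and exports a malloc bdev over TCP. A sketch of the sequence traced above, with the workspace-absolute paths shortened to repo-relative ones; the script also waits for the RPC socket before issuing calls, which is elided here:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for /var/tmp/spdk.sock, then configure the target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB ramdisk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420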
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:07.304 20:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:07.304 20:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:07.304 20:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:07.304 20:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:07.304 20:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:07.304 20:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:07.561 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:07.561 fio-3.35 00:32:07.561 Starting 1 thread 00:32:10.088 00:32:10.088 test: (groupid=0, jobs=1): err= 0: pid=3106114: Sun Oct 13 20:01:59 2024 00:32:10.088 read: IOPS=6470, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2009msec) 00:32:10.088 slat (usec): min=3, max=191, avg= 3.84, stdev= 2.48 00:32:10.088 clat (usec): min=3702, max=19353, avg=10687.77, stdev=938.72 00:32:10.088 lat (usec): min=3753, max=19357, avg=10691.62, stdev=938.57 00:32:10.088 clat percentiles (usec): 00:32:10.088 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:32:10.088 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:32:10.088 | 70.00th=[11076], 
80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:32:10.088 | 99.00th=[12780], 99.50th=[13042], 99.90th=[17695], 99.95th=[18744], 00:32:10.088 | 99.99th=[19268] 00:32:10.088 bw ( KiB/s): min=24760, max=26544, per=99.91%, avg=25860.00, stdev=795.74, samples=4 00:32:10.088 iops : min= 6190, max= 6636, avg=6465.00, stdev=198.93, samples=4 00:32:10.088 write: IOPS=6476, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2009msec); 0 zone resets 00:32:10.088 slat (usec): min=3, max=153, avg= 3.96, stdev= 2.09 00:32:10.088 clat (usec): min=1722, max=17936, avg=8951.42, stdev=788.60 00:32:10.088 lat (usec): min=1731, max=17941, avg=8955.38, stdev=788.55 00:32:10.088 clat percentiles (usec): 00:32:10.088 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:32:10.088 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:10.088 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:32:10.088 | 99.00th=[10552], 99.50th=[10945], 99.90th=[15270], 99.95th=[16712], 00:32:10.088 | 99.99th=[17957] 00:32:10.088 bw ( KiB/s): min=25648, max=26048, per=100.00%, avg=25910.00, stdev=179.76, samples=4 00:32:10.088 iops : min= 6412, max= 6512, avg=6477.50, stdev=44.94, samples=4 00:32:10.088 lat (msec) : 2=0.01%, 4=0.08%, 10=57.63%, 20=42.28% 00:32:10.088 cpu : usr=67.83%, sys=30.58%, ctx=65, majf=0, minf=1544 00:32:10.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:10.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.088 issued rwts: total=13000,13012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:10.088 00:32:10.088 Run status group 0 (all jobs): 00:32:10.088 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.2MB), run=2009-2009msec 00:32:10.088 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2009-2009msec 00:32:10.347 ----------------------------------------------------- 00:32:10.347 Suppressions used: 00:32:10.347 count bytes template 00:32:10.347 1 57 /usr/src/fio/parse.c 00:32:10.347 1 8 libtcmalloc_minimal.so 00:32:10.347 ----------------------------------------------------- 00:32:10.347 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
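The fio jobs in this test run through the external spdk_nvme ioengine rather than the kernel initiator. A sketch of the invocation pattern visible in the trace, with paths shortened; the libasan preload is only present because this is a sanitizer build:

PLUGIN=./build/fio/spdk_nvme
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio \
    ./app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096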
common/autotest_common.sh@1341 -- # shift 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:10.347 20:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:10.605 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:10.605 fio-3.35 00:32:10.605 Starting 1 thread 00:32:13.133 00:32:13.133 test: (groupid=0, jobs=1): err= 0: pid=3106445: Sun Oct 13 20:02:02 2024 00:32:13.133 read: IOPS=6279, BW=98.1MiB/s (103MB/s)(197MiB/2010msec) 00:32:13.133 slat (usec): min=3, max=104, avg= 5.06, stdev= 2.02 00:32:13.133 clat (usec): min=2859, max=22078, avg=11660.08, stdev=2669.15 00:32:13.133 lat (usec): min=2864, max=22084, avg=11665.14, stdev=2669.16 00:32:13.133 clat percentiles (usec): 00:32:13.133 | 1.00th=[ 6325], 5.00th=[ 7373], 10.00th=[ 8356], 20.00th=[ 9372], 00:32:13.133 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11600], 60.00th=[12125], 00:32:13.133 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15139], 95.00th=[16319], 00:32:13.133 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20317], 99.95th=[20579], 00:32:13.133 | 99.99th=[21890] 00:32:13.133 bw ( KiB/s): min=44064, max=55520, per=49.62%, avg=49848.00, stdev=5389.78, samples=4 00:32:13.133 iops : min= 2754, max= 3470, avg=3115.50, stdev=336.86, samples=4 00:32:13.133 write: IOPS=3588, BW=56.1MiB/s (58.8MB/s)(103MiB/1831msec); 0 zone resets 00:32:13.133 slat (usec): min=32, max=146, avg=36.49, stdev= 5.66 00:32:13.133 clat (usec): min=8652, max=28650, avg=15605.99, stdev=2756.32 00:32:13.133 lat (usec): min=8687, max=28700, avg=15642.48, stdev=2756.24 00:32:13.133 clat percentiles (usec): 00:32:13.133 | 1.00th=[10290], 5.00th=[11600], 10.00th=[12256], 20.00th=[13173], 00:32:13.133 | 30.00th=[13960], 40.00th=[14746], 50.00th=[15270], 60.00th=[16057], 00:32:13.133 | 70.00th=[16909], 80.00th=[17957], 90.00th=[19268], 95.00th=[20317], 00:32:13.133 | 99.00th=[22938], 99.50th=[23987], 99.90th=[25822], 99.95th=[26346], 00:32:13.133 | 99.99th=[28705] 00:32:13.133 bw ( KiB/s): min=45856, max=57600, per=90.52%, avg=51968.00, stdev=4947.42, samples=4 00:32:13.133 iops : min= 2866, max= 3600, avg=3248.00, stdev=309.21, samples=4 00:32:13.133 lat (msec) : 4=0.10%, 10=18.07%, 
20=79.52%, 50=2.30% 00:32:13.133 cpu : usr=79.29%, sys=19.56%, ctx=41, majf=0, minf=2127 00:32:13.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:13.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.133 issued rwts: total=12621,6570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.133 00:32:13.133 Run status group 0 (all jobs): 00:32:13.133 READ: bw=98.1MiB/s (103MB/s), 98.1MiB/s-98.1MiB/s (103MB/s-103MB/s), io=197MiB (207MB), run=2010-2010msec 00:32:13.133 WRITE: bw=56.1MiB/s (58.8MB/s), 56.1MiB/s-56.1MiB/s (58.8MB/s-58.8MB/s), io=103MiB (108MB), run=1831-1831msec 00:32:13.133 ----------------------------------------------------- 00:32:13.133 Suppressions used: 00:32:13.133 count bytes template 00:32:13.133 1 57 /usr/src/fio/parse.c 00:32:13.133 234 22464 /usr/src/fio/iolog.c 00:32:13.133 1 8 libtcmalloc_minimal.so 00:32:13.133 ----------------------------------------------------- 00:32:13.133 00:32:13.133 20:02:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:32:13.391 20:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:16.684 Nvme0n1 00:32:16.684 20:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=941ac4ce-fe1a-481e-95e9-d156d38dc8d9 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 941ac4ce-fe1a-481e-95e9-d156d38dc8d9 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=941ac4ce-fe1a-481e-95e9-d156d38dc8d9 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 
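After the malloc-backed run, fio.sh attaches the host's local NVMe drive directly and carves a logical volume out of it, sizing the volume from the lvstore's free clusters. A sketch of the provisioning steps around this point in the trace; the PCI address and sizes are this host's, and jq selects by lvstore name here instead of the UUID used in the trace:

./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2
./scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0    # 1 GiB clusters
fc=$(./scripts/rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')
cs=$(./scripts/rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')
free_mb=$(( fc * cs / 1024 / 1024 ))                                     # 930 clusters * 1 GiB -> 952320 MiB in this run
./scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"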
00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:19.979 { 00:32:19.979 "uuid": "941ac4ce-fe1a-481e-95e9-d156d38dc8d9", 00:32:19.979 "name": "lvs_0", 00:32:19.979 "base_bdev": "Nvme0n1", 00:32:19.979 "total_data_clusters": 930, 00:32:19.979 "free_clusters": 930, 00:32:19.979 "block_size": 512, 00:32:19.979 "cluster_size": 1073741824 00:32:19.979 } 00:32:19.979 ]' 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="941ac4ce-fe1a-481e-95e9-d156d38dc8d9") .free_clusters' 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="941ac4ce-fe1a-481e-95e9-d156d38dc8d9") .cluster_size' 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:19.979 952320 00:32:19.979 20:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:20.237 5801cbc6-11a0-4031-861e-bea479ff9568 00:32:20.237 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:20.912 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:20.912 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:21.201 20:02:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:21.201 20:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:21.460 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:21.460 fio-3.35 00:32:21.460 Starting 1 thread 00:32:23.994 00:32:23.994 test: (groupid=0, jobs=1): err= 0: pid=3107847: Sun Oct 13 20:02:13 2024 00:32:23.994 read: IOPS=4446, BW=17.4MiB/s (18.2MB/s)(34.9MiB/2010msec) 00:32:23.994 slat (usec): min=2, max=206, avg= 3.86, stdev= 3.26 00:32:23.994 clat (usec): min=1124, max=172778, avg=15585.71, stdev=13109.62 00:32:23.994 lat (usec): min=1129, max=172845, avg=15589.56, stdev=13110.29 00:32:23.994 clat percentiles (msec): 00:32:23.994 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:23.994 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:23.994 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:23.994 | 99.00th=[ 21], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 174], 00:32:23.994 | 99.99th=[ 174] 00:32:23.994 bw ( KiB/s): min=12784, max=19776, per=99.63%, avg=17720.00, stdev=3303.77, samples=4 00:32:23.994 iops : min= 3196, max= 4944, avg=4430.00, stdev=825.94, samples=4 00:32:23.994 write: IOPS=4445, BW=17.4MiB/s (18.2MB/s)(34.9MiB/2010msec); 0 zone resets 00:32:23.994 slat (usec): min=3, max=160, avg= 3.90, stdev= 2.26 00:32:23.994 clat (usec): min=471, max=170101, avg=13034.86, stdev=12345.96 00:32:23.994 lat (usec): min=476, max=170111, avg=13038.76, stdev=12346.55 00:32:23.994 clat percentiles (msec): 00:32:23.994 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:23.994 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:32:23.994 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:23.994 | 99.00th=[ 16], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 171], 00:32:23.994 | 99.99th=[ 171] 00:32:23.994 bw ( KiB/s): min=13352, max=19392, per=99.94%, avg=17770.00, stdev=2947.34, samples=4 00:32:23.994 iops : min= 3338, max= 4848, avg=4442.50, stdev=736.84, samples=4 00:32:23.994 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 
00:32:23.994 lat (msec) : 2=0.02%, 4=0.06%, 10=1.69%, 20=97.31%, 50=0.18% 00:32:23.994 lat (msec) : 250=0.72% 00:32:23.994 cpu : usr=68.19%, sys=30.61%, ctx=95, majf=0, minf=1541 00:32:23.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:23.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:23.994 issued rwts: total=8937,8935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:23.994 00:32:23.994 Run status group 0 (all jobs): 00:32:23.994 READ: bw=17.4MiB/s (18.2MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.2MB/s), io=34.9MiB (36.6MB), run=2010-2010msec 00:32:23.994 WRITE: bw=17.4MiB/s (18.2MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.2MB/s), io=34.9MiB (36.6MB), run=2010-2010msec 00:32:24.253 ----------------------------------------------------- 00:32:24.253 Suppressions used: 00:32:24.253 count bytes template 00:32:24.253 1 58 /usr/src/fio/parse.c 00:32:24.253 1 8 libtcmalloc_minimal.so 00:32:24.253 ----------------------------------------------------- 00:32:24.253 00:32:24.253 20:02:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:24.511 20:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=2c81bca9-1235-4b88-9894-f83d8bcaf488 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 2c81bca9-1235-4b88-9894-f83d8bcaf488 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=2c81bca9-1235-4b88-9894-f83d8bcaf488 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:25.888 { 00:32:25.888 "uuid": "941ac4ce-fe1a-481e-95e9-d156d38dc8d9", 00:32:25.888 "name": "lvs_0", 00:32:25.888 "base_bdev": "Nvme0n1", 00:32:25.888 "total_data_clusters": 930, 00:32:25.888 "free_clusters": 0, 00:32:25.888 "block_size": 512, 00:32:25.888 "cluster_size": 1073741824 00:32:25.888 }, 00:32:25.888 { 00:32:25.888 "uuid": "2c81bca9-1235-4b88-9894-f83d8bcaf488", 00:32:25.888 "name": "lvs_n_0", 00:32:25.888 "base_bdev": "5801cbc6-11a0-4031-861e-bea479ff9568", 00:32:25.888 "total_data_clusters": 237847, 00:32:25.888 "free_clusters": 237847, 00:32:25.888 "block_size": 512, 00:32:25.888 "cluster_size": 4194304 00:32:25.888 } 00:32:25.888 ]' 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2c81bca9-1235-4b88-9894-f83d8bcaf488") .free_clusters' 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:25.888 20:02:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2c81bca9-1235-4b88-9894-f83d8bcaf488") .cluster_size' 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:25.888 951388 00:32:25.888 20:02:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:27.264 5985171e-22a9-4d64-a124-b83ecd5eeb63 00:32:27.264 20:02:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:27.522 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:27.780 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # break 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:28.038 20:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.307 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:28.307 fio-3.35 00:32:28.307 Starting 1 thread 00:32:30.840 00:32:30.840 test: (groupid=0, jobs=1): err= 0: pid=3108699: Sun Oct 13 20:02:20 2024 00:32:30.840 read: IOPS=4297, BW=16.8MiB/s (17.6MB/s)(33.8MiB/2012msec) 00:32:30.840 slat (usec): min=2, max=155, avg= 3.83, stdev= 2.61 00:32:30.840 clat (usec): min=6085, max=26490, avg=16109.50, stdev=1592.63 00:32:30.840 lat (usec): min=6091, max=26494, avg=16113.33, stdev=1592.49 00:32:30.840 clat percentiles (usec): 00:32:30.840 | 1.00th=[12780], 5.00th=[13698], 10.00th=[14222], 20.00th=[14877], 00:32:30.840 | 30.00th=[15270], 40.00th=[15664], 50.00th=[16057], 60.00th=[16450], 00:32:30.840 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18744], 00:32:30.840 | 99.00th=[19792], 99.50th=[20841], 99.90th=[24773], 99.95th=[26084], 00:32:30.840 | 99.99th=[26608] 00:32:30.840 bw ( KiB/s): min=16152, max=17784, per=99.73%, avg=17142.00, stdev=712.63, samples=4 00:32:30.840 iops : min= 4038, max= 4446, avg=4285.50, stdev=178.16, samples=4 00:32:30.840 write: IOPS=4298, BW=16.8MiB/s (17.6MB/s)(33.8MiB/2012msec); 0 zone resets 00:32:30.840 slat (usec): min=3, max=109, avg= 3.92, stdev= 1.91 00:32:30.840 clat (usec): min=2900, max=24736, avg=13385.67, stdev=1286.55 00:32:30.840 lat (usec): min=2909, max=24740, avg=13389.60, stdev=1286.50 00:32:30.840 clat percentiles (usec): 00:32:30.840 | 1.00th=[10552], 5.00th=[11469], 10.00th=[11863], 20.00th=[12518], 00:32:30.840 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:32:30.840 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:32:30.840 | 99.00th=[16188], 99.50th=[16909], 99.90th=[22414], 99.95th=[22938], 00:32:30.840 | 99.99th=[24773] 00:32:30.840 bw ( KiB/s): min=16960, max=17472, per=99.98%, avg=17190.00, stdev=211.40, samples=4 00:32:30.840 iops : min= 4240, max= 4368, avg=4297.50, stdev=52.85, samples=4 00:32:30.840 lat (msec) : 4=0.02%, 10=0.36%, 20=99.12%, 50=0.50% 00:32:30.840 cpu : usr=65.54%, sys=33.17%, ctx=86, majf=0, minf=1540 00:32:30.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:30.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:30.840 issued rwts: total=8646,8648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:30.840 00:32:30.840 Run status group 0 (all jobs): 00:32:30.840 READ: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=33.8MiB (35.4MB), run=2012-2012msec 00:32:30.840 WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=33.8MiB (35.4MB), run=2012-2012msec 00:32:30.840 ----------------------------------------------------- 00:32:30.840 Suppressions used: 00:32:30.840 count bytes template 00:32:30.840 1 58 /usr/src/fio/parse.c 
00:32:30.840 1 8 libtcmalloc_minimal.so 00:32:30.840 ----------------------------------------------------- 00:32:30.840 00:32:30.840 20:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:31.098 20:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:31.098 20:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:36.374 20:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:36.374 20:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:38.910 20:02:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:38.910 20:02:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.816 rmmod nvme_tcp 00:32:40.816 rmmod nvme_fabrics 00:32:40.816 rmmod nvme_keyring 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3105637 ']' 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3105637 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3105637 ']' 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3105637 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:40.816 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3105637 00:32:41.074 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:41.074 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:32:41.074 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3105637' 00:32:41.074 killing process with pid 3105637 00:32:41.074 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3105637 00:32:41.074 20:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3105637 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.457 20:02:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.367 00:32:44.367 real 0m42.247s 00:32:44.367 user 2m41.414s 00:32:44.367 sys 0m8.092s 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.367 ************************************ 00:32:44.367 END TEST nvmf_fio_host 00:32:44.367 ************************************ 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.367 ************************************ 00:32:44.367 START TEST nvmf_failover 00:32:44.367 ************************************ 00:32:44.367 20:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:44.367 * Looking for test storage... 
00:32:44.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:44.367 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:44.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.368 --rc genhtml_branch_coverage=1 00:32:44.368 --rc genhtml_function_coverage=1 00:32:44.368 --rc genhtml_legend=1 00:32:44.368 --rc geninfo_all_blocks=1 00:32:44.368 --rc geninfo_unexecuted_blocks=1 00:32:44.368 00:32:44.368 ' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:44.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.368 --rc genhtml_branch_coverage=1 00:32:44.368 --rc genhtml_function_coverage=1 00:32:44.368 --rc genhtml_legend=1 00:32:44.368 --rc geninfo_all_blocks=1 00:32:44.368 --rc geninfo_unexecuted_blocks=1 00:32:44.368 00:32:44.368 ' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:44.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.368 --rc genhtml_branch_coverage=1 00:32:44.368 --rc genhtml_function_coverage=1 00:32:44.368 --rc genhtml_legend=1 00:32:44.368 --rc geninfo_all_blocks=1 00:32:44.368 --rc geninfo_unexecuted_blocks=1 00:32:44.368 00:32:44.368 ' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:44.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.368 --rc genhtml_branch_coverage=1 00:32:44.368 --rc genhtml_function_coverage=1 00:32:44.368 --rc genhtml_legend=1 00:32:44.368 --rc geninfo_all_blocks=1 00:32:44.368 --rc geninfo_unexecuted_blocks=1 00:32:44.368 00:32:44.368 ' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:44.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
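Stripped of the surrounding helpers, the failover flow this script drives further down in the trace is roughly the following; the individual rpc.py, bdevperf, and bdevperf.py invocations are copied from the trace, while the backgrounding and the loop over the three ports are a condensation:

    #!/usr/bin/env bash
    # Rough condensation of the nvmf_failover flow traced later in this log.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py=$spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: TCP transport, a 64 MiB / 512 B malloc namespace, three listeners.
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
    "$rpc_py" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    "$rpc_py" nvmf_subsystem_add_ns "$nqn" Malloc0
    for port in 4420 4421 4422; do
        "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
    done

    # Host side: bdevperf in RPC mode with two paths attached in failover mode
    # (the real test waits for the bdevperf RPC socket before issuing these).
    "$spdk/build/examples/bdevperf" -z -r "$bdevperf_rpc_sock" -q 128 -o 4096 -w verify -t 15 -f &
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn" -x failover
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn" -x failover
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bdevperf_rpc_sock" perform_tests &

    # Failover is then forced by pulling listeners out from under the active path
    # while I/O runs, for example:
    "$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420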
00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.368 20:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:46.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.902 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:46.903 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:46.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:46.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:32:46.903 00:32:46.903 --- 10.0.0.2 ping statistics --- 00:32:46.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.903 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:32:46.903 00:32:46.903 --- 10.0.0.1 ping statistics --- 00:32:46.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.903 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3112211 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3112211 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3112211 ']' 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.903 20:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:46.903 [2024-10-13 20:02:36.427344] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:32:46.903 [2024-10-13 20:02:36.427524] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.903 [2024-10-13 20:02:36.573410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:46.903 [2024-10-13 20:02:36.715074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
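The nvmfappstart step traced above amounts to launching nvmf_tgt inside that namespace and waiting for its RPC socket; a minimal stand-in (the real waitforlisten helper waits until the app is actually listening on /var/tmp/spdk.sock, not just for the file to appear) would be:

    #!/usr/bin/env bash
    # Minimal stand-in for nvmfappstart -m 0xE as traced above, not the real helper.
    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

    ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Crude stand-in for waitforlisten on the UNIX domain socket /var/tmp/spdk.sock.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done
    echo "nvmf_tgt running as pid $nvmfpid"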
00:32:46.903 [2024-10-13 20:02:36.715158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.903 [2024-10-13 20:02:36.715184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.903 [2024-10-13 20:02:36.715207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.903 [2024-10-13 20:02:36.715226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.162 [2024-10-13 20:02:36.717893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.162 [2024-10-13 20:02:36.718018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.162 [2024-10-13 20:02:36.718022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.728 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:47.986 [2024-10-13 20:02:37.676994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.986 20:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:48.288 Malloc0 00:32:48.288 20:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.545 20:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.803 20:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.062 [2024-10-13 20:02:38.845150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.062 20:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:49.321 [2024-10-13 20:02:39.110116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:49.321 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:49.580 [2024-10-13 20:02:39.391154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3112626 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3112626 /var/tmp/bdevperf.sock 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3112626 ']' 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.839 20:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:50.776 20:02:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.776 20:02:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:50.776 20:02:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:51.344 NVMe0n1 00:32:51.344 20:02:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:51.602 00:32:51.602 20:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3112777 00:32:51.602 20:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:51.602 20:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:52.537 20:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.796 [2024-10-13 20:02:42.587697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:32:52.796 [2024-10-13 20:02:42.587836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.587990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:32:52.796 [2024-10-13 20:02:42.588218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 [2024-10-13 20:02:42.588314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:52.796 20:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:56.083 20:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:56.341 00:32:56.341 20:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:56.601 [2024-10-13 20:02:46.351866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.351949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.351986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352152] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 [2024-10-13 20:02:46.352220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:56.601 20:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:59.892 20:02:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.892 [2024-10-13 20:02:49.645312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.892 20:02:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:00.903 20:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:01.163 [2024-10-13 20:02:50.926771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.926990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.163 [2024-10-13 20:02:50.927369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 [2024-10-13 20:02:50.927704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:01.164 20:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3112777 00:33:07.735 { 00:33:07.735 "results": [ 00:33:07.735 { 00:33:07.735 "job": "NVMe0n1", 00:33:07.735 "core_mask": "0x1", 00:33:07.735 "workload": "verify", 00:33:07.735 "status": "finished", 00:33:07.735 "verify_range": { 00:33:07.735 "start": 0, 00:33:07.735 "length": 16384 00:33:07.735 }, 00:33:07.735 "queue_depth": 128, 00:33:07.735 "io_size": 4096, 00:33:07.735 "runtime": 15.015814, 00:33:07.735 "iops": 6013.79319163117, 00:33:07.735 "mibps": 23.491379654809258, 00:33:07.735 "io_failed": 13476, 00:33:07.735 "io_timeout": 0, 00:33:07.735 "avg_latency_us": 18486.144969454028, 00:33:07.735 "min_latency_us": 1080.1303703703704, 00:33:07.735 "max_latency_us": 20777.33925925926 00:33:07.735 } 00:33:07.735 ], 00:33:07.735 "core_count": 1 00:33:07.735 } 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3112626 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3112626 ']' 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3112626 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3112626 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3112626' 00:33:07.735 killing process with pid 3112626 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3112626 00:33:07.735 20:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3112626 00:33:07.735 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:07.735 [2024-10-13 20:02:39.497261] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:33:07.735 [2024-10-13 20:02:39.497425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112626 ] 00:33:07.735 [2024-10-13 20:02:39.624046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.735 [2024-10-13 20:02:39.749351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.735 Running I/O for 15 seconds... 00:33:07.735 6181.00 IOPS, 24.14 MiB/s [2024-10-13T18:02:57.550Z] [2024-10-13 20:02:42.589489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.589955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.589977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.590971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.590991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.591034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.736 [2024-10-13 20:02:42.591075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 
20:02:42.591293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.736 [2024-10-13 20:02:42.591537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.736 [2024-10-13 20:02:42.591560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.737 [2024-10-13 20:02:42.591831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.591968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.591990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.592966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.592986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 
[2024-10-13 20:02:42.593215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.737 [2024-10-13 20:02:42.593536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.737 [2024-10-13 20:02:42.593559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.593978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.593999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.594043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.738 [2024-10-13 20:02:42.594087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57568 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.594958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.594979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 
[2024-10-13 20:02:42.595110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.738 [2024-10-13 20:02:42.595486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.738 [2024-10-13 20:02:42.595510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.739 [2024-10-13 20:02:42.595531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:42.595580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.739 [2024-10-13 20:02:42.595612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:33:07.739 [2024-10-13 20:02:42.595632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57728 len:8 PRP1 0x0 PRP2 0x0 00:33:07.739 [2024-10-13 20:02:42.595654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:42.595943] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:33:07.739 [2024-10-13 20:02:42.595978] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:07.739 [2024-10-13 20:02:42.596042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.739 [2024-10-13 20:02:42.596068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:42.596100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.739 [2024-10-13 20:02:42.596121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:42.596142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.739 [2024-10-13 20:02:42.596162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:42.596184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.739 [2024-10-13 20:02:42.596203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:42.596223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.739 [2024-10-13 20:02:42.596297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:07.739 [2024-10-13 20:02:42.600204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.739 [2024-10-13 20:02:42.766824] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:07.739 5635.00 IOPS, 22.01 MiB/s [2024-10-13T18:02:57.554Z] 5857.67 IOPS, 22.88 MiB/s [2024-10-13T18:02:57.554Z] 6007.00 IOPS, 23.46 MiB/s [2024-10-13T18:02:57.554Z] [2024-10-13 20:02:46.352364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.352970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.352991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.739 [2024-10-13 20:02:46.353717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.739 [2024-10-13 20:02:46.353740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.353765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.353787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.353810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.353831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.353854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.353876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.353899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.353920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.353943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.353965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.353988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:07.740 [2024-10-13 20:02:46.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.354963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.354987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.740 [2024-10-13 20:02:46.355627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.740 [2024-10-13 20:02:46.355650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.355972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.355994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.741 [2024-10-13 20:02:46.356039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20400 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 
[2024-10-13 20:02:46.356695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.356961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.356985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.741 [2024-10-13 20:02:46.357575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.741 [2024-10-13 20:02:46.357595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.357977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.357998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:46.358303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.742 [2024-10-13 20:02:46.358377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.742 [2024-10-13 20:02:46.358406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20776 len:8 PRP1 0x0 PRP2 0x0 00:33:07.742 [2024-10-13 20:02:46.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358725] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller. 
00:33:07.742 [2024-10-13 20:02:46.358755] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:07.742 [2024-10-13 20:02:46.358810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.742 [2024-10-13 20:02:46.358835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.742 [2024-10-13 20:02:46.358878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.742 [2024-10-13 20:02:46.358919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.742 [2024-10-13 20:02:46.358959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:46.358978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.742 [2024-10-13 20:02:46.359047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:07.742 [2024-10-13 20:02:46.362923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.742 5935.80 IOPS, 23.19 MiB/s [2024-10-13T18:02:57.557Z] [2024-10-13 20:02:46.531007] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
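(Not part of the captured test output: a minimal shell sketch for summarizing the failover activity shown above, assuming the console log has been saved to a hypothetical file named nvmf_host_failover.log; the grep patterns are copied verbatim from the messages printed by nvme_qpair.c, bdev_nvme.c and nvme_ctrlr.c in this log.)
# count I/O completions aborted by SQ deletion during the controller resets
grep -c 'ABORTED - SQ DELETION' nvmf_host_failover.log
# list the path transitions reported by bdev_nvme_failover_trid (e.g. 10.0.0.2:4420 -> 4421 -> 4422)
grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' nvmf_host_failover.log | sort | uniq -c
# confirm each failover ended with a successful controller reset
grep -c 'Resetting controller successful' nvmf_host_failover.log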
00:33:07.742 5884.50 IOPS, 22.99 MiB/s [2024-10-13T18:02:57.557Z] 5920.14 IOPS, 23.13 MiB/s [2024-10-13T18:02:57.557Z] 5944.62 IOPS, 23.22 MiB/s [2024-10-13T18:02:57.557Z] 5954.89 IOPS, 23.26 MiB/s [2024-10-13T18:02:57.557Z] [2024-10-13 20:02:50.929523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.929935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.929958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.929980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:07.742 [2024-10-13 20:02:50.930024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.930069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.930113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.930157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.930200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.930245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.742 [2024-10-13 20:02:50.930288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.742 [2024-10-13 20:02:50.930312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.742 [2024-10-13 20:02:50.930333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.930967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.930990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.743 [2024-10-13 20:02:50.931410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 
[2024-10-13 20:02:50.931834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.931965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.931985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.932008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.932028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.932052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.932077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.932101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.932122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.932168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.743 [2024-10-13 20:02:50.932191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.743 [2024-10-13 20:02:50.932214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.932997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16376 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-10-13 20:02:50.933336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 
20:02:50.933699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.744 [2024-10-13 20:02:50.933884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.744 [2024-10-13 20:02:50.933906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.933927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.933951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.933972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.933995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.745 [2024-10-13 20:02:50.934785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.934868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17192 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.934890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.934937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.934956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17200 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.934976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.934997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17208 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17224 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17232 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17240 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17256 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17264 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17272 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.935750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:07.745 [2024-10-13 20:02:50.935766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:07.745 [2024-10-13 20:02:50.935783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17288 len:8 PRP1 0x0 PRP2 0x0 00:33:07.745 [2024-10-13 20:02:50.935802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.745 [2024-10-13 20:02:50.936089] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 
00:33:07.745 [2024-10-13 20:02:50.936120] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:07.745 [2024-10-13 20:02:50.936172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.746 [2024-10-13 20:02:50.936197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.746 [2024-10-13 20:02:50.936220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.746 [2024-10-13 20:02:50.936240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.746 [2024-10-13 20:02:50.936261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.746 [2024-10-13 20:02:50.936281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.746 [2024-10-13 20:02:50.936302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:07.746 [2024-10-13 20:02:50.936321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:07.746 [2024-10-13 20:02:50.936341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.746 [2024-10-13 20:02:50.936431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:07.746 [2024-10-13 20:02:50.940175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.746 [2024-10-13 20:02:51.015387] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
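The run of NOTICE lines above is one complete failover cycle as bdev_nvme sees it: every outstanding WRITE/READ on the I/O qpair is completed with ABORTED - SQ DELETION when the qpair is torn down, queued requests are completed manually, the freed qpair triggers a controller reset, the transport ID is switched from 10.0.0.2:4422 back to 10.0.0.2:4420, and the cycle ends with "Resetting controller successful". The path set that makes this possible is built by attaching the same subsystem once per listener under a single bdev name with -x failover. A minimal sketch of that setup, reusing the rpc.py invocations that appear later in this trace (the loop and the variable names are illustrative, not taken from the script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # register three paths to the same subsystem under the single bdev name NVMe0
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # removing the active path is what produces the "Start failover from ... to ..." notice
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0   # surviving paths still registered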
00:33:07.746 5926.00 IOPS, 23.15 MiB/s [2024-10-13T18:02:57.561Z] 5948.00 IOPS, 23.23 MiB/s [2024-10-13T18:02:57.561Z] 5966.33 IOPS, 23.31 MiB/s [2024-10-13T18:02:57.561Z] 5987.15 IOPS, 23.39 MiB/s [2024-10-13T18:02:57.561Z] 6004.00 IOPS, 23.45 MiB/s [2024-10-13T18:02:57.561Z] 6011.60 IOPS, 23.48 MiB/s 00:33:07.746 Latency(us) 00:33:07.746 [2024-10-13T18:02:57.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.746 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:07.746 Verification LBA range: start 0x0 length 0x4000 00:33:07.746 NVMe0n1 : 15.02 6013.79 23.49 897.45 0.00 18486.14 1080.13 20777.34 00:33:07.746 [2024-10-13T18:02:57.561Z] =================================================================================================================== 00:33:07.746 [2024-10-13T18:02:57.561Z] Total : 6013.79 23.49 897.45 0.00 18486.14 1080.13 20777.34 00:33:07.746 Received shutdown signal, test time was about 15.000000 seconds 00:33:07.746 00:33:07.746 Latency(us) 00:33:07.746 [2024-10-13T18:02:57.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.746 [2024-10-13T18:02:57.561Z] =================================================================================================================== 00:33:07.746 [2024-10-13T18:02:57.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3114676 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3114676 /var/tmp/bdevperf.sock 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3114676 ']' 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:07.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
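The first bdevperf pass ends here: the 15 s verify run averaged roughly 6 k IOPS, and the harness then asserts that exactly three "Resetting controller successful" messages were logged, matching the three controller resets forced during the run. The second pass that starts next launches bdevperf idle (-z) so it can be configured over the RPC socket named by -r before any I/O is issued; the script then adds the 4421/4422 listeners on the target, attaches the controller paths through /var/tmp/bdevperf.sock, and kicks off the workload with bdevperf.py perform_tests, as the following trace shows. A condensed sketch of that driver pattern (the $rootdir variable and the backgrounding stand in for the job's longer paths and helper functions):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # SPDK checkout used by this job
    sock=/var/tmp/bdevperf.sock
    # start bdevperf idle; -z makes it wait for configuration over the -r RPC socket
    $rootdir/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # (the real script polls $sock with waitforlisten before issuing any RPC)
    # target side: expose the subsystem on the extra failover ports
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side: attach a path and run the configured verify workload
    $rootdir/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests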
00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.746 20:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:08.680 20:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.680 20:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:08.680 20:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:08.938 [2024-10-13 20:02:58.666251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:08.938 20:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:09.196 [2024-10-13 20:02:58.927088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:09.196 20:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:09.765 NVMe0n1 00:33:09.765 20:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:10.023 00:33:10.281 20:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:10.539 00:33:10.539 20:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:10.539 20:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:10.797 20:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:11.365 20:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:14.652 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:14.652 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:14.652 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3115652 00:33:14.652 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:14.652 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3115652 00:33:15.587 { 00:33:15.587 "results": [ 00:33:15.587 { 00:33:15.587 "job": "NVMe0n1", 00:33:15.587 "core_mask": "0x1", 
00:33:15.587 "workload": "verify", 00:33:15.587 "status": "finished", 00:33:15.587 "verify_range": { 00:33:15.587 "start": 0, 00:33:15.587 "length": 16384 00:33:15.587 }, 00:33:15.587 "queue_depth": 128, 00:33:15.587 "io_size": 4096, 00:33:15.587 "runtime": 1.017528, 00:33:15.587 "iops": 6315.305328207184, 00:33:15.587 "mibps": 24.669161438309313, 00:33:15.587 "io_failed": 0, 00:33:15.587 "io_timeout": 0, 00:33:15.587 "avg_latency_us": 20177.4179211767, 00:33:15.587 "min_latency_us": 3980.705185185185, 00:33:15.587 "max_latency_us": 18058.80888888889 00:33:15.587 } 00:33:15.587 ], 00:33:15.587 "core_count": 1 00:33:15.587 } 00:33:15.587 20:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:15.587 [2024-10-13 20:02:57.449016] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:33:15.587 [2024-10-13 20:02:57.449166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114676 ] 00:33:15.587 [2024-10-13 20:02:57.576851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.587 [2024-10-13 20:02:57.701890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.587 [2024-10-13 20:03:00.871294] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:15.587 [2024-10-13 20:03:00.871451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.587 [2024-10-13 20:03:00.871486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.587 [2024-10-13 20:03:00.871524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.587 [2024-10-13 20:03:00.871545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.587 [2024-10-13 20:03:00.871568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.587 [2024-10-13 20:03:00.871588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.587 [2024-10-13 20:03:00.871609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.587 [2024-10-13 20:03:00.871630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.587 [2024-10-13 20:03:00.871649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:15.587 [2024-10-13 20:03:00.871757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:15.587 [2024-10-13 20:03:00.871811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:15.587 [2024-10-13 20:03:01.005615] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:15.587 Running I/O for 1 seconds... 00:33:15.587 6296.00 IOPS, 24.59 MiB/s 00:33:15.587 Latency(us) 00:33:15.587 [2024-10-13T18:03:05.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.587 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:15.587 Verification LBA range: start 0x0 length 0x4000 00:33:15.587 NVMe0n1 : 1.02 6315.31 24.67 0.00 0.00 20177.42 3980.71 18058.81 00:33:15.587 [2024-10-13T18:03:05.402Z] =================================================================================================================== 00:33:15.587 [2024-10-13T18:03:05.402Z] Total : 6315.31 24.67 0.00 0.00 20177.42 3980.71 18058.81 00:33:15.587 20:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:15.587 20:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:15.845 20:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:16.104 20:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:16.104 20:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:16.362 20:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:16.620 20:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3114676 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3114676 ']' 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3114676 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:19.904 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3114676 00:33:20.164 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:20.164 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:20.164 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3114676' 00:33:20.164 killing process with pid 3114676 00:33:20.164 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3114676 00:33:20.164 20:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3114676 00:33:21.103 20:03:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.103 rmmod nvme_tcp 00:33:21.103 rmmod nvme_fabrics 00:33:21.103 rmmod nvme_keyring 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3112211 ']' 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3112211 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3112211 ']' 00:33:21.103 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3112211 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3112211 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3112211' 00:33:21.361 killing process with pid 3112211 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3112211 00:33:21.361 20:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3112211 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.742 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.653 20:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.654 00:33:24.654 real 0m40.261s 00:33:24.654 user 2m21.338s 00:33:24.654 sys 0m6.347s 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:24.654 ************************************ 00:33:24.654 END TEST nvmf_failover 00:33:24.654 ************************************ 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.654 ************************************ 00:33:24.654 START TEST nvmf_host_discovery 00:33:24.654 ************************************ 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:24.654 * Looking for test storage... 
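Before the discovery test proper gets going, the nvmf_failover teardown traced above condenses to roughly the following shell sequence (a sketch assembled from the commands in the trace; the PID, interface and namespace names are specific to this run, and the netns removal step is an assumption about what _remove_spdk_ns amounts to here):

  # host/failover.sh teardown, as traced above
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT
  rm -f test/nvmf/host/try.txt

  # nvmftestfini / nvmfcleanup: unload the kernel initiator stack, stop the target app
  modprobe -v -r nvme-tcp          # this run also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3112211 && wait 3112211     # nvmf_tgt pid for this run

  # strip the SPDK_NVMF iptables rules and tear down the test namespace
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns does in this setup
  ip -4 addr flush cvl_0_1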
00:33:24.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:24.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.654 --rc genhtml_branch_coverage=1 00:33:24.654 --rc genhtml_function_coverage=1 00:33:24.654 --rc genhtml_legend=1 00:33:24.654 --rc geninfo_all_blocks=1 00:33:24.654 --rc geninfo_unexecuted_blocks=1 00:33:24.654 00:33:24.654 ' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:24.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.654 --rc genhtml_branch_coverage=1 00:33:24.654 --rc genhtml_function_coverage=1 00:33:24.654 --rc genhtml_legend=1 00:33:24.654 --rc geninfo_all_blocks=1 00:33:24.654 --rc geninfo_unexecuted_blocks=1 00:33:24.654 00:33:24.654 ' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:24.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.654 --rc genhtml_branch_coverage=1 00:33:24.654 --rc genhtml_function_coverage=1 00:33:24.654 --rc genhtml_legend=1 00:33:24.654 --rc geninfo_all_blocks=1 00:33:24.654 --rc geninfo_unexecuted_blocks=1 00:33:24.654 00:33:24.654 ' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:24.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.654 --rc genhtml_branch_coverage=1 00:33:24.654 --rc genhtml_function_coverage=1 00:33:24.654 --rc genhtml_legend=1 00:33:24.654 --rc geninfo_all_blocks=1 00:33:24.654 --rc geninfo_unexecuted_blocks=1 00:33:24.654 00:33:24.654 ' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:24.654 20:03:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.654 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:24.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.655 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.913 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:24.913 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:24.913 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.913 20:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:26.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:26.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.819 20:03:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:26.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:26.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.819 
20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.819 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:27.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:33:27.080 00:33:27.080 --- 10.0.0.2 ping statistics --- 00:33:27.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.080 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:33:27.080 00:33:27.080 --- 10.0.0.1 ping statistics --- 00:33:27.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.080 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3119012 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3119012 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3119012 ']' 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:27.080 20:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:27.080 [2024-10-13 20:03:16.823498] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
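The nvmf_tcp_init sequence traced above builds the standard phy-mode topology for these tests: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so both ends of the TCP connection traverse real hardware even on a single machine. A condensed sketch of the commands shown in the trace (addresses, interface names, and the core mask are the ones this run uses):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  modprobe nvme-tcp
  # nvmfappstart -m 0x2: run the target application inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &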
00:33:27.080 [2024-10-13 20:03:16.823656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.340 [2024-10-13 20:03:16.965153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.340 [2024-10-13 20:03:17.081613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:27.340 [2024-10-13 20:03:17.081710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.340 [2024-10-13 20:03:17.081733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.340 [2024-10-13 20:03:17.081752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.340 [2024-10-13 20:03:17.081768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.340 [2024-10-13 20:03:17.083156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 [2024-10-13 20:03:17.852107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 [2024-10-13 20:03:17.860281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 null0 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 null1 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3119165 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3119165 /tmp/host.sock 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3119165 ']' 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:28.283 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:28.283 20:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.283 [2024-10-13 20:03:17.973722] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
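With the target application up, discovery.sh configures both sides over RPC. rpc_cmd is the suite's wrapper around scripts/rpc.py; without -s it talks to the target started above, and with -s /tmp/host.sock it talks to a second nvmf_tgt that plays the host role. Condensed from the calls in the trace (block counts, ports, and the socket path are the ones used here; backgrounding with '&' is a simplification of how the script records hostpid):

  # target side: TCP transport, a discovery listener on 8009, two null bdevs to export later
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # host side: a second nvmf_tgt whose RPC socket doubles as the test's control channel
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!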
00:33:28.283 [2024-10-13 20:03:17.973863] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119165 ] 00:33:28.546 [2024-10-13 20:03:18.100011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.546 [2024-10-13 20:03:18.218278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:29.483 20:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 [2024-10-13 20:03:19.248235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:29.483 20:03:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:29.483 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:33:29.743 20:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:30.313 [2024-10-13 20:03:19.992997] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:30.313 [2024-10-13 20:03:19.993058] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:30.313 [2024-10-13 20:03:19.993100] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:30.313 
[2024-10-13 20:03:20.079433] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:30.571 [2024-10-13 20:03:20.308655] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:30.571 [2024-10-13 20:03:20.308717] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.829 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
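The rest of the test repeats one pattern: the target publishes a change (a new subsystem, namespace, listener, or allowed host) and the host side is polled until the discovery service has reacted, both through small jq pipelines over the host RPC socket and through the notification counter. A condensed sketch of the calls this part of the test makes (NQNs, address, and ports are the ones in the trace; the expected values are shown as comments):

  # host: follow the discovery controller at 10.0.0.2:8009 as hostnqn nqn.2021-12.io.spdk:test
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test

  # target: expose null0 through cnode0 on 10.0.0.2:4420, allowed for that host only
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # host: poll until the discovery service has attached the new subsystem
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # -> nvme0
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # -> nvme0n1
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs                               # -> 4420
  rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'               # -> 1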
00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:30.830 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.091 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:31.091 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:31.091 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:31.092 20:03:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.092 [2024-10-13 20:03:20.688522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:31.092 [2024-10-13 20:03:20.688929] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:31.092 [2024-10-13 20:03:20.688993] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.092 [2024-10-13 20:03:20.774852] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:31.092 20:03:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:31.092 20:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:31.092 [2024-10-13 20:03:20.877194] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:31.092 [2024-10-13 20:03:20.877233] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:31.092 [2024-10-13 20:03:20.877252] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:32.030 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.030 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:32.031 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:32.290 20:03:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:32.290 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.291 [2024-10-13 20:03:21.905618] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:32.291 [2024-10-13 20:03:21.905685] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.291 [2024-10-13 20:03:21.909886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.291 [2024-10-13 20:03:21.909939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.291 [2024-10-13 20:03:21.909974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.291 [2024-10-13 20:03:21.910000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.291 [2024-10-13 20:03:21.910025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.291 [2024-10-13 20:03:21.910049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.291 [2024-10-13 20:03:21.910072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.291 [2024-10-13 20:03:21.910095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.291 [2024-10-13 20:03:21.910118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:32.291 [2024-10-13 20:03:21.919880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.291 [2024-10-13 20:03:21.929950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.291 [2024-10-13 20:03:21.930202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.291 [2024-10-13 20:03:21.930244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.291 [2024-10-13 20:03:21.930271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 [2024-10-13 20:03:21.930308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 [2024-10-13 20:03:21.930350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.291 [2024-10-13 20:03:21.930384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.291 [2024-10-13 
20:03:21.930436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.291 [2024-10-13 20:03:21.930475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.291 [2024-10-13 20:03:21.940070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.291 [2024-10-13 20:03:21.940266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.291 [2024-10-13 20:03:21.940303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.291 [2024-10-13 20:03:21.940327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 [2024-10-13 20:03:21.940359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 [2024-10-13 20:03:21.940411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.291 [2024-10-13 20:03:21.940435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.291 [2024-10-13 20:03:21.940454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.291 [2024-10-13 20:03:21.940484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.291 [2024-10-13 20:03:21.950167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:32.291 [2024-10-13 20:03:21.950350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.291 [2024-10-13 20:03:21.950402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.291 [2024-10-13 20:03:21.950449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 [2024-10-13 20:03:21.950483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 [2024-10-13 20:03:21.950514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.291 [2024-10-13 20:03:21.950536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.291 [2024-10-13 20:03:21.950555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.291 [2024-10-13 20:03:21.950586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:32.291 [2024-10-13 20:03:21.960275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.291 [2024-10-13 20:03:21.960481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.291 [2024-10-13 20:03:21.960520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.291 [2024-10-13 20:03:21.960546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 [2024-10-13 20:03:21.960579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 [2024-10-13 20:03:21.960610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.291 [2024-10-13 20:03:21.960632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.291 [2024-10-13 20:03:21.960651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.291 [2024-10-13 20:03:21.960682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
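Editor's note: the @129/@130 waits interleaved with the reconnect errors above reuse the two list helpers traced at host/discovery.sh@59 and @55. Each issues a JSON-RPC call over the host application's socket and flattens the result into one sorted, space-separated string so it can be compared against literals such as "nvme0n1 nvme0n2". A sketch assembled from the traced pipelines (rpc_cmd is the suite's JSON-RPC wrapper, assumed here to forward to SPDK's scripts/rpc.py):

    # Controller names known to the host app (host/discovery.sh@59 in the trace)
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Block devices exposed by those controllers (host/discovery.sh@55 in the trace)
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }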
00:33:32.291 [2024-10-13 20:03:21.970390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.291 [2024-10-13 20:03:21.970607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.291 [2024-10-13 20:03:21.970644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.291 [2024-10-13 20:03:21.970669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 [2024-10-13 20:03:21.970713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 [2024-10-13 20:03:21.970759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.291 [2024-10-13 20:03:21.970784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.291 [2024-10-13 20:03:21.970804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.291 [2024-10-13 20:03:21.970833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.291 [2024-10-13 20:03:21.980520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.291 [2024-10-13 20:03:21.980710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.291 [2024-10-13 20:03:21.980747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.291 [2024-10-13 20:03:21.980772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.291 [2024-10-13 20:03:21.980804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.291 [2024-10-13 20:03:21.980835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.291 [2024-10-13 20:03:21.980857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.291 [2024-10-13 20:03:21.980876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.291 [2024-10-13 20:03:21.980907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
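Editor's note: the is_notification_count_eq checks (@108, @114, @123 above and @132/@138 below) track how many attach notifications the host app has emitted since the previous check. The bookkeeping traced at host/discovery.sh@74–@75 can be sketched as follows; the way notify_id advances is inferred from the values 0 → 1 → 2 → 4 recorded in this log, so treat it as an approximation:

    # Count new notifications since $notify_id and advance the cursor.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # advancement rule inferred from the traced values
    }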
00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.291 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:32.291 [2024-10-13 20:03:21.990626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.292 [2024-10-13 20:03:21.990889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.292 [2024-10-13 20:03:21.990924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:32.292 [2024-10-13 20:03:21.990947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:32.292 [2024-10-13 20:03:21.990978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:32.292 [2024-10-13 20:03:21.991007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:32.292 [2024-10-13 20:03:21.991028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:32.292 [2024-10-13 20:03:21.991047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:32.292 [2024-10-13 20:03:21.991075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
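Editor's note: the burst of "connect() failed, errno = 111" / "Bad file descriptor" / "Resetting controller failed." entries above is the host retrying the 10.0.0.2:4420 path that the @127 call just removed; errno 111 is ECONNREFUSED, so each reconnect attempt is rejected until the discovery poller drops the stale path (the "...cnode0:10.0.0.2:4420 not found" entry below). The @131 wait then passes once only the second listener remains, using the path helper traced at host/discovery.sh@63, sketched here from the traced pipeline:

    # Listener ports (trsvcid) of every connected path for a named controller.
    get_subsystem_paths() {
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # As used at @131: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
    # ($NVMF_SECOND_PORT resolves to 4421 in this run; it is set elsewhere in the suite.)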
00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:32.292 [2024-10-13 20:03:21.992276] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:32.292 [2024-10-13 20:03:21.992327] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:32.292 20:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:32.292 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.551 20:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.492 [2024-10-13 20:03:23.221701] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:33.492 [2024-10-13 20:03:23.221744] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:33.492 [2024-10-13 20:03:23.221800] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:33.751 [2024-10-13 20:03:23.309120] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:33.751 [2024-10-13 20:03:23.378456] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:33.751 [2024-10-13 20:03:23.378510] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.751 request: 00:33:33.751 { 00:33:33.751 "name": "nvme", 00:33:33.751 "trtype": "tcp", 00:33:33.751 "traddr": "10.0.0.2", 00:33:33.751 "adrfam": "ipv4", 00:33:33.751 "trsvcid": "8009", 00:33:33.751 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:33.751 "wait_for_attach": true, 00:33:33.751 "method": "bdev_nvme_start_discovery", 00:33:33.751 "req_id": 1 00:33:33.751 } 00:33:33.751 Got JSON-RPC error response 00:33:33.751 response: 00:33:33.751 { 00:33:33.751 "code": -17, 00:33:33.751 "message": "File exists" 00:33:33.751 } 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.751 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local 
es=0 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.752 request: 00:33:33.752 { 00:33:33.752 "name": "nvme_second", 00:33:33.752 "trtype": "tcp", 00:33:33.752 "traddr": "10.0.0.2", 00:33:33.752 "adrfam": "ipv4", 00:33:33.752 "trsvcid": "8009", 00:33:33.752 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:33.752 "wait_for_attach": true, 00:33:33.752 "method": "bdev_nvme_start_discovery", 00:33:33.752 "req_id": 1 00:33:33.752 } 00:33:33.752 Got JSON-RPC error response 00:33:33.752 response: 00:33:33.752 { 00:33:33.752 "code": -17, 00:33:33.752 "message": "File exists" 00:33:33.752 } 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.752 20:03:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:33.752 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.013 20:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.949 [2024-10-13 20:03:24.574370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.949 [2024-10-13 20:03:24.574487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:34.949 [2024-10-13 20:03:24.574567] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:34.949 [2024-10-13 20:03:24.574592] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:34.949 [2024-10-13 20:03:24.574623] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:35.887 [2024-10-13 20:03:25.576766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.887 [2024-10-13 20:03:25.576862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:35.887 [2024-10-13 20:03:25.576966] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:35.887 [2024-10-13 20:03:25.576995] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:35.887 [2024-10-13 20:03:25.577019] bdev_nvme.c:7221:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:33:36.823 [2024-10-13 20:03:26.578791] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:36.823 request: 00:33:36.823 { 00:33:36.823 "name": "nvme_second", 00:33:36.823 "trtype": "tcp", 00:33:36.823 "traddr": "10.0.0.2", 00:33:36.823 "adrfam": "ipv4", 00:33:36.823 "trsvcid": "8010", 00:33:36.823 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:36.823 "wait_for_attach": false, 00:33:36.823 "attach_timeout_ms": 3000, 00:33:36.823 "method": "bdev_nvme_start_discovery", 00:33:36.823 "req_id": 1 00:33:36.823 } 00:33:36.823 Got JSON-RPC error response 00:33:36.823 response: 00:33:36.823 { 00:33:36.823 "code": -110, 00:33:36.823 "message": "Connection timed out" 00:33:36.823 } 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3119165 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.823 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.823 rmmod nvme_tcp 00:33:37.082 rmmod nvme_fabrics 00:33:37.082 rmmod nvme_keyring 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.082 20:03:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3119012 ']' 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3119012 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3119012 ']' 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3119012 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3119012 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3119012' 00:33:37.082 killing process with pid 3119012 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3119012 00:33:37.082 20:03:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3119012 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.075 20:03:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.639 00:33:40.639 real 0m15.607s 00:33:40.639 user 0m22.963s 00:33:40.639 sys 0m3.092s 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.639 ************************************ 00:33:40.639 END TEST nvmf_host_discovery 00:33:40.639 ************************************ 
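[Editor's note] The nvmf_host_discovery block above ends with two deliberate negative checks: a second bdev_nvme_start_discovery against 10.0.0.2:8009, where discovery controller "nvme" is already attached, is rejected with JSON-RPC error -17 ("File exists"); a discovery attempt against 10.0.0.2:8010, where nothing is listening, fails each connect() with errno 111 and gives up after the 3000 ms attach timeout with error -110 ("Connection timed out"). A condensed sketch of those two calls follows; it is not part of the captured trace, but reuses the rpc.py path, host socket, bdev name and hostnqn that appear in the log:

    # assumes the host app from the run above is still listening on /tmp/host.sock
    # with discovery controller "nvme" attached to 10.0.0.2:8009
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # port 8009 already has a discovery controller -> expect -17 "File exists"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
        || echo "expected failure: File exists"

    # nothing listens on port 8010 -> connect() fails with errno 111; the attach
    # gives up after -T 3000 ms -> expect -110 "Connection timed out"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
        || echo "expected failure: Connection timed out"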
00:33:40.639 20:03:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.639 ************************************ 00:33:40.639 START TEST nvmf_host_multipath_status 00:33:40.639 ************************************ 00:33:40.639 20:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:40.639 * Looking for test storage... 00:33:40.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:40.639 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:40.639 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.640 --rc genhtml_branch_coverage=1 00:33:40.640 --rc genhtml_function_coverage=1 00:33:40.640 --rc genhtml_legend=1 00:33:40.640 --rc geninfo_all_blocks=1 00:33:40.640 --rc geninfo_unexecuted_blocks=1 00:33:40.640 00:33:40.640 ' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.640 --rc genhtml_branch_coverage=1 00:33:40.640 --rc genhtml_function_coverage=1 00:33:40.640 --rc genhtml_legend=1 00:33:40.640 --rc geninfo_all_blocks=1 00:33:40.640 --rc geninfo_unexecuted_blocks=1 00:33:40.640 00:33:40.640 ' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.640 --rc genhtml_branch_coverage=1 00:33:40.640 --rc genhtml_function_coverage=1 00:33:40.640 --rc genhtml_legend=1 00:33:40.640 --rc geninfo_all_blocks=1 00:33:40.640 --rc geninfo_unexecuted_blocks=1 00:33:40.640 00:33:40.640 ' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.640 --rc genhtml_branch_coverage=1 00:33:40.640 --rc genhtml_function_coverage=1 00:33:40.640 --rc genhtml_legend=1 00:33:40.640 --rc geninfo_all_blocks=1 00:33:40.640 --rc geninfo_unexecuted_blocks=1 00:33:40.640 00:33:40.640 ' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:40.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:40.640 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.641 20:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:42.546 20:03:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.546 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:42.546 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:42.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:42.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:33:42.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.547 20:03:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:42.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:33:42.547 00:33:42.547 --- 10.0.0.2 ping statistics --- 00:33:42.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.547 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:42.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:33:42.547 00:33:42.547 --- 10.0.0.1 ping statistics --- 00:33:42.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.547 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3122344 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3122344 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3122344 ']' 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.547 20:03:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.547 20:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:42.808 [2024-10-13 20:03:32.390490] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:33:42.808 [2024-10-13 20:03:32.390621] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.808 [2024-10-13 20:03:32.533380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:43.068 [2024-10-13 20:03:32.677468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.068 [2024-10-13 20:03:32.677549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.068 [2024-10-13 20:03:32.677576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.068 [2024-10-13 20:03:32.677599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.068 [2024-10-13 20:03:32.677619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.068 [2024-10-13 20:03:32.680293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.068 [2024-10-13 20:03:32.680295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3122344 00:33:43.635 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:44.204 [2024-10-13 20:03:33.724849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.204 20:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:44.463 Malloc0 00:33:44.463 20:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:44.720 20:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:44.977 20:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.236 [2024-10-13 20:03:35.046607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.494 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:45.752 [2024-10-13 20:03:35.371657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3122762 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3122762 /var/tmp/bdevperf.sock 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3122762 ']' 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:45.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
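[Editor's note] At this point the multipath_status target side is fully assembled: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exporting it, and two listeners on 10.0.0.2:4420 and 10.0.0.2:4421; bdevperf is then started on /var/tmp/bdevperf.sock and, in the trace that follows, attaches the same subsystem through both ports with -x multipath. A condensed, illustrative recap of the target-side RPCs issued above (rpc.py path, IP and ports taken from the trace, not a new procedure):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421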
00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:45.752 20:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:46.686 20:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:46.686 20:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:46.686 20:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:46.944 20:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:47.514 Nvme0n1 00:33:47.514 20:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:48.084 Nvme0n1 00:33:48.084 20:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:48.084 20:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:50.003 20:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:50.003 20:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:50.261 20:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:50.520 20:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.900 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:52.159 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:52.159 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:52.159 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.159 20:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:52.417 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.417 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:52.417 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.417 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:52.676 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.676 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:52.676 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.676 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:52.934 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.934 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:52.934 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.934 20:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:53.501 20:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.501 20:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:53.501 20:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
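[Editor's note] The rest of this test repeats one pattern: set the ANA state of each listener with nvmf_subsystem_listener_set_ana_state, sleep 1, then query bdevperf over /var/tmp/bdevperf.sock with bdev_nvme_get_io_paths and filter the per-port current/connected/accessible flags with jq. Below is a minimal re-creation of what the script's port_status helper appears to do, based only on the commands visible in this trace; the variable names inside the function are illustrative, not taken from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    port_status() {
        # port_status <trsvcid> <field> <expected>, e.g. port_status 4421 current true
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # with 4420 set to non_optimized and 4421 to optimized (the state change in
    # progress here), 4421 is expected to become the current path:
    port_status 4421 current true && echo "4421 is the active path"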
00:33:53.501 20:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:53.760 20:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.137 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:55.395 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.395 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:55.395 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.395 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:55.653 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.653 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:55.653 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.653 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:55.911 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.911 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:55.911 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:33:55.911 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:56.170 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.170 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:56.170 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.170 20:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:56.737 20:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.737 20:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:56.737 20:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:56.737 20:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:57.306 20:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:58.243 20:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:58.243 20:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:58.243 20:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.243 20:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:58.501 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.501 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:58.501 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.501 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:58.759 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:58.759 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:58.759 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.759 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:59.018 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.018 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:59.018 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.018 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:59.276 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.276 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:59.276 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.276 20:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:59.534 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.534 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:59.534 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.534 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.792 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.792 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:59.792 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:00.051 20:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:00.311 20:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:01.691 20:03:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.691 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:01.949 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:01.949 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:01.949 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.949 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:02.207 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.207 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:02.207 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.207 20:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.466 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.466 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.466 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.466 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:02.724 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.724 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:02.724 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.724 20:03:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:02.982 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:02.982 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:02.982 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:03.240 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:03.499 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.878 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:05.136 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:05.136 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:05.136 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.136 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:05.394 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.394 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:05.394 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.394 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:05.652 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.652 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:05.652 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.652 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:05.911 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:05.911 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:05.911 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.911 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:06.169 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:06.169 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:06.169 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:06.427 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:06.685 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:08.062 20:03:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.062 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:08.321 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.321 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:08.321 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.321 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:08.578 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.578 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:08.578 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.578 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:08.836 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.836 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:08.836 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.836 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:09.095 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:09.095 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:09.095 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.095 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:09.661 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.661 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:09.661 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:34:09.661 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:09.919 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:10.553 20:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.501 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:11.760 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.760 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:11.760 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.760 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:12.019 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.019 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:12.019 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.019 20:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.586 20:04:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.586 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:12.854 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.855 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:12.855 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:13.117 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:13.376 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.753 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:15.011 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.011 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:15.011 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.011 20:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:15.269 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.269 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:15.269 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.269 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:15.528 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.528 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:15.528 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.528 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:15.786 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.786 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:15.786 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.786 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:16.352 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.352 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:16.352 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:16.352 20:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:16.612 20:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
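Each set_ANA_state step in this test issues two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port, and the script then sleeps for a second so the host-side path state can catch up before check_status runs. A sketch of that sequence under the same assumptions as this run (target NQN nqn.2016-06.io.spdk:cnode1, TCP listeners on 10.0.0.2 ports 4420 and 4421, rpc.py addressed by a relative path):

    # Set the ANA state reported by each listener, e.g. non_optimized/non_optimized.
    set_ANA_state() {
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    set_ANA_state non_optimized non_optimized
    sleep 1   # give the initiator time to observe the new ANA states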
00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.989 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:18.247 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.247 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:18.247 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.247 20:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:18.504 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.504 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:18.504 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.504 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:18.763 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.763 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:18.763 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.763 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:19.022 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.022 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:19.022 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.022 20:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:19.280 20:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.280 20:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:19.280 20:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:19.849 20:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:19.849 20:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.227 20:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:21.486 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.486 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:21.486 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.486 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:21.744 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:21.744 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:21.744 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.744 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:22.002 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.002 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:22.002 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.002 20:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:22.260 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.260 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:22.260 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.260 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3122762 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3122762 ']' 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3122762 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:22.519 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3122762 00:34:22.777 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:34:22.777 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:34:22.777 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3122762' 00:34:22.777 killing process with pid 3122762 00:34:22.777 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3122762 00:34:22.777 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3122762 00:34:22.777 { 00:34:22.777 "results": [ 00:34:22.777 { 00:34:22.777 "job": "Nvme0n1", 
00:34:22.777 "core_mask": "0x4", 00:34:22.777 "workload": "verify", 00:34:22.777 "status": "terminated", 00:34:22.777 "verify_range": { 00:34:22.777 "start": 0, 00:34:22.777 "length": 16384 00:34:22.777 }, 00:34:22.777 "queue_depth": 128, 00:34:22.777 "io_size": 4096, 00:34:22.777 "runtime": 34.406115, 00:34:22.777 "iops": 5925.109533581458, 00:34:22.777 "mibps": 23.14495911555257, 00:34:22.777 "io_failed": 0, 00:34:22.777 "io_timeout": 0, 00:34:22.777 "avg_latency_us": 21567.493960691976, 00:34:22.777 "min_latency_us": 1055.8577777777778, 00:34:22.777 "max_latency_us": 4101097.2444444443 00:34:22.777 } 00:34:22.777 ], 00:34:22.777 "core_count": 1 00:34:22.777 } 00:34:23.357 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3122762 00:34:23.357 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:23.357 [2024-10-13 20:03:35.466091] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:34:23.357 [2024-10-13 20:03:35.466231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122762 ] 00:34:23.357 [2024-10-13 20:03:35.594834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.357 [2024-10-13 20:03:35.718797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.357 Running I/O for 90 seconds... 00:34:23.357 6080.00 IOPS, 23.75 MiB/s [2024-10-13T18:04:13.172Z] 6189.50 IOPS, 24.18 MiB/s [2024-10-13T18:04:13.172Z] 6266.00 IOPS, 24.48 MiB/s [2024-10-13T18:04:13.172Z] 6264.75 IOPS, 24.47 MiB/s [2024-10-13T18:04:13.172Z] 6278.40 IOPS, 24.52 MiB/s [2024-10-13T18:04:13.172Z] 6279.50 IOPS, 24.53 MiB/s [2024-10-13T18:04:13.172Z] 6284.43 IOPS, 24.55 MiB/s [2024-10-13T18:04:13.172Z] 6303.88 IOPS, 24.62 MiB/s [2024-10-13T18:04:13.172Z] 6280.44 IOPS, 24.53 MiB/s [2024-10-13T18:04:13.172Z] 6293.20 IOPS, 24.58 MiB/s [2024-10-13T18:04:13.172Z] 6286.09 IOPS, 24.56 MiB/s [2024-10-13T18:04:13.172Z] 6283.42 IOPS, 24.54 MiB/s [2024-10-13T18:04:13.173Z] 6273.54 IOPS, 24.51 MiB/s [2024-10-13T18:04:13.173Z] 6271.43 IOPS, 24.50 MiB/s [2024-10-13T18:04:13.173Z] 6267.80 IOPS, 24.48 MiB/s [2024-10-13T18:04:13.173Z] [2024-10-13 20:03:52.980509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.980612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.980674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.980703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.980758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.980784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 
20:03:52.980821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.980846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.980883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.980909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.980945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.980970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.981591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.981617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.982964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.982999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.983954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.983980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.984016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.984040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:23.358 [2024-10-13 20:03:52.984076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.358 [2024-10-13 20:03:52.984100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984898] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.984962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.984987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.985952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.985991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.986565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.986590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.987333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.987366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:23.359 [2024-10-13 20:03:52.987416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.359 [2024-10-13 20:03:52.987444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 
20:03:52.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.987948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.987984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.988013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.988075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101248 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.988136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.988198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.988262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.988330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.988959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.988995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.360 [2024-10-13 20:03:52.989340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 
m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.989943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.989968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:23.360 [2024-10-13 20:03:52.990005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.360 [2024-10-13 20:03:52.990030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.361 [2024-10-13 20:03:52.990092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.361 [2024-10-13 20:03:52.990152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.361 [2024-10-13 20:03:52.990214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.361 [2024-10-13 20:03:52.990275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.361 [2024-10-13 20:03:52.990336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 
20:03:52.990612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.990961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.990987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100616 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.991963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.991989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.992024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.992051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.992087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.992112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.992149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.992178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.992215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.992242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.992278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.992304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.992341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.992366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.993520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.993554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:34:23.361 [2024-10-13 20:03:52.993598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.993625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.993661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.361 [2024-10-13 20:03:52.993687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:23.361 [2024-10-13 20:03:52.993722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.993748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.993784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.993811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.993847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.993872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.993908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.993934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.993971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.993998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.994956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.994992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.362 [2024-10-13 20:03:52.995855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:23.362 [2024-10-13 20:03:52.995890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.995917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.995953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.995978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.996473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.996499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.997962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.997990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.998051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.998111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.998171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.998233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.363 [2024-10-13 20:03:52.998295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998681] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.998970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.998996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.999055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.999091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.999117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.999151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.999176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:23.363 [2024-10-13 20:03:52.999211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.363 [2024-10-13 20:03:52.999238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 
[2024-10-13 20:03:52.999299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:52.999965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:52.999990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:53.000052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:53.000114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:53.000174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:53.000245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.364 [2024-10-13 20:03:53.000311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.000958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.000994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 
p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:23.364 [2024-10-13 20:03:53.001830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.364 [2024-10-13 20:03:53.001855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.001891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.001917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.001953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.001978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.002014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.002040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.002075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.002102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.002139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.002165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.002217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.002245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.002280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.002306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.003547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 
20:03:53.003619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.003682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.003746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.003814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.003874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.003952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.003988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100848 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.004952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.004978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 
20:03:53.005539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.365 [2024-10-13 20:03:53.005703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:23.365 [2024-10-13 20:03:53.005742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.005767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.005817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.005842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.005876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.005901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.005936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.005961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.005996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.006517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.006543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.007956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.007980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.366 [2024-10-13 20:03:53.008492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.366 [2024-10-13 20:03:53.008560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.366 [2024-10-13 20:03:53.008621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.366 [2024-10-13 20:03:53.008683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.366 [2024-10-13 20:03:53.008759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.366 [2024-10-13 20:03:53.008819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:23.366 [2024-10-13 20:03:53.008854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.008938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.008974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.008999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 
20:03:53.009493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.009519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.009955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.009991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.367 [2024-10-13 20:03:53.010545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.010621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.010688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.010765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.010824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.010888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.010949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.010984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.367 [2024-10-13 20:03:53.011461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:23.367 [2024-10-13 20:03:53.011497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.011943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.011969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:70 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.012459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.012485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.013605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.013639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.013682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.013708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.013745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.013780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.013816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.013850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.013886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.013911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.013963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.013987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.014952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.014988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.015012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.015048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.015073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.015109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.015139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.015176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.015201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:23.368 [2024-10-13 20:03:53.015237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.368 [2024-10-13 20:03:53.015262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 
20:03:53.015656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.015945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.015976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101080 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.016656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.016699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.017959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.017984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:34:23.369 [2024-10-13 20:03:53.018420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.369 [2024-10-13 20:03:53.018695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:23.369 [2024-10-13 20:03:53.018745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.369 [2024-10-13 20:03:53.018771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.018806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.018831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.018866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.018891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.018925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.018958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.370 [2024-10-13 20:03:53.019718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.019958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.019983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.370 [2024-10-13 20:03:53.020782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.370 [2024-10-13 20:03:53.020864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.370 [2024-10-13 20:03:53.020928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.020963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.370 [2024-10-13 20:03:53.020988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.021022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.370 [2024-10-13 20:03:53.021046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.021090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.370 [2024-10-13 20:03:53.021114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:23.370 [2024-10-13 20:03:53.021148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:34:23.371 [2024-10-13 20:03:53.021634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.021963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.021988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.022665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.022693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.023799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.023833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.023893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.023924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:23.371 [2024-10-13 20:03:53.023963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.371 [2024-10-13 20:03:53.023988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:23.371 [2024-10-13 20:03:53.024024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:23.371 [2024-10-13 20:03:53.024050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every I/O still queued on qid:1 (elapsed 00:34:23.371-00:34:23.376, timestamps 2024-10-13 20:03:53.024024 through 20:03:53.038682): WRITE commands covering lba 100520-101288 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands covering lba 100272-100512 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cid ranging 0-126, sqhd wrapping through 0000-007f; the pending-queue dump restarts at 20:03:53.034123 and lists the same LBAs again with different cids ...]
00:34:23.376 [2024-10-13 20:03:53.038682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:79 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.376 [2024-10-13 20:03:53.038720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.038761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.376 [2024-10-13 20:03:53.038810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.038859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.376 [2024-10-13 20:03:53.038883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.038921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.376 [2024-10-13 20:03:53.038946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.038984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.376 [2024-10-13 20:03:53.039009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.376 [2024-10-13 20:03:53.039078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 
20:03:53.039411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.039946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.039982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.040007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.040045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.040070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.040123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.376 [2024-10-13 20:03:53.040149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:23.376 [2024-10-13 20:03:53.040196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.040222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.040943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.040981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.041006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.041069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.041149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.041214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.041279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.377 [2024-10-13 20:03:53.041349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.041946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.041985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042880] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.042959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.042984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:23.377 [2024-10-13 20:03:53.043021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.377 [2024-10-13 20:03:53.043050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:03:53.043105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:03:53.043132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:03:53.043172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:03:53.043198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:03:53.043237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:03:53.043263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:03:53.043517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:03:53.043552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:23.378 5903.38 IOPS, 23.06 MiB/s [2024-10-13T18:04:13.193Z] 5556.12 IOPS, 21.70 MiB/s [2024-10-13T18:04:13.193Z] 5247.44 IOPS, 20.50 MiB/s [2024-10-13T18:04:13.193Z] 4971.26 IOPS, 19.42 MiB/s [2024-10-13T18:04:13.193Z] 4991.40 IOPS, 19.50 MiB/s [2024-10-13T18:04:13.193Z] 5054.62 IOPS, 19.74 MiB/s [2024-10-13T18:04:13.193Z] 5124.82 IOPS, 20.02 MiB/s [2024-10-13T18:04:13.193Z] 5271.78 IOPS, 20.59 MiB/s [2024-10-13T18:04:13.193Z] 5400.79 IOPS, 21.10 MiB/s [2024-10-13T18:04:13.193Z] 5530.56 IOPS, 21.60 MiB/s [2024-10-13T18:04:13.193Z] 5563.69 IOPS, 21.73 MiB/s [2024-10-13T18:04:13.193Z] 5591.04 IOPS, 21.84 MiB/s [2024-10-13T18:04:13.193Z] 5616.79 IOPS, 21.94 MiB/s [2024-10-13T18:04:13.193Z] 5675.07 IOPS, 22.17 MiB/s [2024-10-13T18:04:13.193Z] 5768.70 IOPS, 22.53 MiB/s [2024-10-13T18:04:13.193Z] 5851.13 IOPS, 22.86 MiB/s [2024-10-13T18:04:13.193Z] [2024-10-13 20:04:09.621014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 
20:04:09.621131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54368 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.621943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.621968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.622004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.622028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.622066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.622091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.625808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.625850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.625897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.625924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.625963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.625990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626262] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.626692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.626769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.626830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.626890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626926] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.626956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.626994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.378 [2024-10-13 20:04:09.627019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.627055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.627079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:23.378 [2024-10-13 20:04:09.627116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.378 [2024-10-13 20:04:09.627142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.379 [2024-10-13 20:04:09.627202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.379 [2024-10-13 20:04:09.627262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.379 [2024-10-13 20:04:09.627322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.379 [2024-10-13 20:04:09.627408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.379 [2024-10-13 20:04:09.627476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.379 [2024-10-13 20:04:09.627540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 
m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.379 [2024-10-13 20:04:09.627601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.379 [2024-10-13 20:04:09.627664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.379 [2024-10-13 20:04:09.627742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:23.379 [2024-10-13 20:04:09.627810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:23.379 [2024-10-13 20:04:09.627863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:23.379 [2024-10-13 20:04:09.627889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:23.379 5911.16 IOPS, 23.09 MiB/s [2024-10-13T18:04:13.194Z] 5911.82 IOPS, 23.09 MiB/s [2024-10-13T18:04:13.194Z] 5923.56 IOPS, 23.14 MiB/s [2024-10-13T18:04:13.194Z] Received shutdown signal, test time was about 34.406965 seconds 00:34:23.379 00:34:23.379 Latency(us) 00:34:23.379 [2024-10-13T18:04:13.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.379 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:23.379 Verification LBA range: start 0x0 length 0x4000 00:34:23.379 Nvme0n1 : 34.41 5925.11 23.14 0.00 0.00 21567.49 1055.86 4101097.24 00:34:23.379 [2024-10-13T18:04:13.194Z] =================================================================================================================== 00:34:23.379 [2024-10-13T18:04:13.194Z] Total : 5925.11 23.14 0.00 0.00 21567.49 1055.86 4101097.24 00:34:23.379 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:23.638 
20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.638 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.638 rmmod nvme_tcp 00:34:23.638 rmmod nvme_fabrics 00:34:23.898 rmmod nvme_keyring 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3122344 ']' 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3122344 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3122344 ']' 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3122344 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3122344 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3122344' 00:34:23.898 killing process with pid 3122344 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3122344 00:34:23.898 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3122344 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.277 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.183 00:34:27.183 real 0m46.886s 00:34:27.183 user 2m21.430s 00:34:27.183 sys 0m10.443s 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:27.183 ************************************ 00:34:27.183 END TEST nvmf_host_multipath_status 00:34:27.183 ************************************ 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:27.183 20:04:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.183 ************************************ 00:34:27.183 START TEST nvmf_discovery_remove_ifc 00:34:27.183 ************************************ 00:34:27.184 20:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:27.184 * Looking for test storage... 00:34:27.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:27.184 20:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:27.184 20:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:34:27.184 20:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:27.442 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.443 --rc genhtml_branch_coverage=1 00:34:27.443 --rc genhtml_function_coverage=1 00:34:27.443 --rc genhtml_legend=1 00:34:27.443 --rc geninfo_all_blocks=1 00:34:27.443 --rc geninfo_unexecuted_blocks=1 00:34:27.443 00:34:27.443 ' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.443 --rc genhtml_branch_coverage=1 00:34:27.443 --rc genhtml_function_coverage=1 00:34:27.443 --rc genhtml_legend=1 00:34:27.443 --rc geninfo_all_blocks=1 00:34:27.443 --rc geninfo_unexecuted_blocks=1 00:34:27.443 00:34:27.443 ' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.443 --rc genhtml_branch_coverage=1 00:34:27.443 --rc genhtml_function_coverage=1 00:34:27.443 --rc genhtml_legend=1 00:34:27.443 --rc geninfo_all_blocks=1 00:34:27.443 --rc geninfo_unexecuted_blocks=1 00:34:27.443 00:34:27.443 ' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:27.443 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:27.443 --rc genhtml_branch_coverage=1 00:34:27.443 --rc genhtml_function_coverage=1 00:34:27.443 --rc genhtml_legend=1 00:34:27.443 --rc geninfo_all_blocks=1 00:34:27.443 --rc geninfo_unexecuted_blocks=1 00:34:27.443 00:34:27.443 ' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:27.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:27.443 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.444 20:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:29.356 20:04:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:29.356 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:29.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:29.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:29.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.356 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:29.357 20:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:29.357 20:04:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:29.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:29.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:34:29.357 00:34:29.357 --- 10.0.0.2 ping statistics --- 00:34:29.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.357 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:29.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:29.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:34:29.357 00:34:29.357 --- 10.0.0.1 ping statistics --- 00:34:29.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.357 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3129358 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 3129358 
00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3129358 ']' 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:29.357 20:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:29.617 [2024-10-13 20:04:19.214665] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:34:29.617 [2024-10-13 20:04:19.214846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.617 [2024-10-13 20:04:19.352359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.879 [2024-10-13 20:04:19.471262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.879 [2024-10-13 20:04:19.471334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.879 [2024-10-13 20:04:19.471355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.879 [2024-10-13 20:04:19.471389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.879 [2024-10-13 20:04:19.471415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:29.879 [2024-10-13 20:04:19.472811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.447 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.447 [2024-10-13 20:04:20.247334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.447 [2024-10-13 20:04:20.255645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:30.706 null0 00:34:30.706 [2024-10-13 20:04:20.287512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3129523 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3129523 /tmp/host.sock 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3129523 ']' 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:30.706 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:30.706 20:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.706 [2024-10-13 20:04:20.399347] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:34:30.706 [2024-10-13 20:04:20.399527] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129523 ] 00:34:30.967 [2024-10-13 20:04:20.535236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.967 [2024-10-13 20:04:20.671118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.903 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:32.161 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.161 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:32.161 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.161 20:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:33.095 [2024-10-13 20:04:22.841295] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:33.095 [2024-10-13 20:04:22.841345] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:33.095 [2024-10-13 20:04:22.841392] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:33.355 [2024-10-13 20:04:22.968888] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:33.614 [2024-10-13 20:04:23.193526] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:33.614 [2024-10-13 20:04:23.193617] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:33.614 [2024-10-13 20:04:23.193696] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:33.614 [2024-10-13 20:04:23.193750] bdev_nvme.c:6972:discovery_attach_controller_done: 
*INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:33.614 [2024-10-13 20:04:23.193802] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:33.614 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:33.615 [2024-10-13 20:04:23.239822] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:33.615 20:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:34.550 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.810 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:34.810 20:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:35.746 20:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:36.685 20:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:38.068 20:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:39.008 20:04:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:39.008 20:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:39.008 [2024-10-13 20:04:28.634458] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:39.008 [2024-10-13 20:04:28.634564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.008 [2024-10-13 20:04:28.634598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.008 [2024-10-13 20:04:28.634644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.008 [2024-10-13 20:04:28.634668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.008 [2024-10-13 20:04:28.634690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.008 [2024-10-13 20:04:28.634726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.008 [2024-10-13 20:04:28.634751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.008 [2024-10-13 20:04:28.634776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.008 [2024-10-13 20:04:28.634801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.008 [2024-10-13 20:04:28.634824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.008 [2024-10-13 20:04:28.634847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:39.008 [2024-10-13 20:04:28.644455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:39.008 [2024-10-13 20:04:28.654513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:39.952 20:04:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:39.952 [2024-10-13 20:04:29.716466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:39.952 [2024-10-13 20:04:29.716561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:34:39.952 [2024-10-13 20:04:29.716600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:39.952 [2024-10-13 20:04:29.716680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:39.952 [2024-10-13 20:04:29.717491] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:39.952 [2024-10-13 20:04:29.717557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:39.952 [2024-10-13 20:04:29.717583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:39.952 [2024-10-13 20:04:29.717608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:39.952 [2024-10-13 20:04:29.717665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:39.952 [2024-10-13 20:04:29.717711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:39.952 20:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:41.331 [2024-10-13 20:04:30.720270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:41.331 [2024-10-13 20:04:30.720347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:41.331 [2024-10-13 20:04:30.720373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:41.331 [2024-10-13 20:04:30.720405] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:41.331 [2024-10-13 20:04:30.720487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:41.331 [2024-10-13 20:04:30.720553] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:41.331 [2024-10-13 20:04:30.720623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.331 [2024-10-13 20:04:30.720655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.331 [2024-10-13 20:04:30.720701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.331 [2024-10-13 20:04:30.720726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.331 [2024-10-13 20:04:30.720750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.331 [2024-10-13 20:04:30.720784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.331 [2024-10-13 20:04:30.720809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.331 [2024-10-13 20:04:30.720831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.331 [2024-10-13 20:04:30.720854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.331 [2024-10-13 20:04:30.720876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.331 [2024-10-13 20:04:30.720898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:41.331 [2024-10-13 20:04:30.720990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:41.331 [2024-10-13 20:04:30.721979] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:41.331 [2024-10-13 20:04:30.722015] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:41.331 20:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:42.332 20:04:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:42.332 20:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:43.272 [2024-10-13 20:04:32.777221] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:43.272 [2024-10-13 20:04:32.777273] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:43.272 [2024-10-13 20:04:32.777325] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:43.272 [2024-10-13 20:04:32.904834] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:43.272 20:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:43.272 [2024-10-13 20:04:32.968802] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:43.272 [2024-10-13 20:04:32.968889] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:43.272 [2024-10-13 20:04:32.968972] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:43.272 [2024-10-13 20:04:32.969013] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:43.272 [2024-10-13 20:04:32.969042] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:43.272 [2024-10-13 20:04:32.975658] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f3900 was disconnected and 
freed. delete nvme_qpair. 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3129523 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3129523 ']' 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3129523 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:44.211 20:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3129523 00:34:44.471 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:44.471 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:44.471 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3129523' 00:34:44.471 killing process with pid 3129523 00:34:44.471 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3129523 00:34:44.471 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3129523 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.409 rmmod nvme_tcp 00:34:45.409 rmmod nvme_fabrics 00:34:45.409 rmmod 
nvme_keyring 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3129358 ']' 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3129358 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3129358 ']' 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3129358 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:45.409 20:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3129358 00:34:45.409 20:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:45.409 20:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:45.409 20:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3129358' 00:34:45.409 killing process with pid 3129358 00:34:45.409 20:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3129358 00:34:45.409 20:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3129358 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:46.344 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:34:46.603 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:46.603 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:46.603 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.603 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.603 20:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:48.509 00:34:48.509 real 0m21.310s 00:34:48.509 user 0m31.556s 00:34:48.509 sys 0m3.230s 00:34:48.509 20:04:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.509 ************************************ 00:34:48.509 END TEST nvmf_discovery_remove_ifc 00:34:48.509 ************************************ 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.509 ************************************ 00:34:48.509 START TEST nvmf_identify_kernel_target 00:34:48.509 ************************************ 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:48.509 * Looking for test storage... 00:34:48.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:48.509 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:48.768 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:48.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.769 --rc genhtml_branch_coverage=1 00:34:48.769 --rc genhtml_function_coverage=1 00:34:48.769 --rc genhtml_legend=1 00:34:48.769 --rc geninfo_all_blocks=1 00:34:48.769 --rc geninfo_unexecuted_blocks=1 00:34:48.769 00:34:48.769 ' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:48.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.769 --rc genhtml_branch_coverage=1 00:34:48.769 --rc genhtml_function_coverage=1 00:34:48.769 --rc genhtml_legend=1 00:34:48.769 --rc geninfo_all_blocks=1 00:34:48.769 --rc geninfo_unexecuted_blocks=1 00:34:48.769 00:34:48.769 ' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:48.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.769 --rc genhtml_branch_coverage=1 00:34:48.769 --rc genhtml_function_coverage=1 00:34:48.769 --rc genhtml_legend=1 00:34:48.769 --rc geninfo_all_blocks=1 00:34:48.769 --rc geninfo_unexecuted_blocks=1 00:34:48.769 00:34:48.769 ' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:48.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.769 --rc genhtml_branch_coverage=1 00:34:48.769 --rc genhtml_function_coverage=1 00:34:48.769 --rc genhtml_legend=1 00:34:48.769 --rc geninfo_all_blocks=1 00:34:48.769 --rc geninfo_unexecuted_blocks=1 00:34:48.769 00:34:48.769 ' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:48.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.769 20:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.682 20:04:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:50.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:50.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:50.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.682 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:50.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.683 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:34:50.942 00:34:50.942 --- 10.0.0.2 ping statistics --- 00:34:50.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.942 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:50.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:34:50.942 00:34:50.942 --- 10.0.0.1 ping statistics --- 00:34:50.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.942 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:50.942 20:04:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:50.942 20:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:51.877 Waiting for block devices as requested 00:34:52.136 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:52.136 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:52.136 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:52.394 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:52.394 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:52.394 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:52.394 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:52.653 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:52.653 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:52.653 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:52.653 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:52.911 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:52.911 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:52.911 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:52.911 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:53.169 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:53.169 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
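The trace that follows shows configure_kernel_target wiring up an in-kernel NVMe-oF/TCP target through configfs before the identify test runs against it. The snippet below is a condensed, standalone sketch of that sequence, not the common.sh helper itself; it assumes the nvmet/nvmet-tcp modules are available, that /dev/nvme0n1 is the free backing block device picked by the loop above, and that the listener should sit on 10.0.0.1:4420 with the test subsystem NQN used throughout this run.

#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF/TCP target setup mirrored by the trace below (assumptions noted above).
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
backing_dev=/dev/nvme0n1      # assumption: the unused local NVMe namespace found by the block-device scan
listen_ip=10.0.0.1            # assumption: the address assigned to the initiator-side interface in this run
nvmet=/sys/kernel/config/nvmet

modprobe nvmet
modprobe nvmet-tcp

# Subsystem with one namespace backed by the block device
mkdir "$nvmet/subsystems/$nqn"
echo "SPDK-$nqn" > "$nvmet/subsystems/$nqn/attr_model"        # model string, where the kernel exposes attr_model
echo 1 > "$nvmet/subsystems/$nqn/attr_allow_any_host"
mkdir "$nvmet/subsystems/$nqn/namespaces/1"
echo "$backing_dev" > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1 > "$nvmet/subsystems/$nqn/namespaces/1/enable"

# TCP listener on port 4420
mkdir "$nvmet/ports/1"
echo "$listen_ip" > "$nvmet/ports/1/addr_traddr"
echo tcp  > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem through the port
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/$nqn"

With a target configured this way, a plain `nvme discover -t tcp -a 10.0.0.1 -s 4420` should report two records — the discovery subsystem and nqn.2016-06.io.spdk:testnqn — which is exactly what the discovery log dump in the trace below shows before spdk_nvme_identify is pointed at each of them.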
00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:53.169 No valid GPT data, bailing 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:53.169 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:34:53.428 20:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:53.428 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:53.428 00:34:53.428 Discovery Log Number of Records 2, Generation counter 2 00:34:53.428 =====Discovery Log Entry 0====== 00:34:53.428 trtype: tcp 00:34:53.428 adrfam: ipv4 00:34:53.428 subtype: current discovery subsystem 00:34:53.428 treq: not specified, sq flow control disable supported 00:34:53.428 portid: 1 00:34:53.428 trsvcid: 4420 00:34:53.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:53.428 traddr: 10.0.0.1 00:34:53.428 eflags: none 00:34:53.428 sectype: none 00:34:53.428 =====Discovery Log Entry 1====== 00:34:53.428 trtype: tcp 00:34:53.428 adrfam: ipv4 00:34:53.428 subtype: nvme subsystem 00:34:53.428 treq: not specified, sq flow control disable 
supported 00:34:53.428 portid: 1 00:34:53.428 trsvcid: 4420 00:34:53.428 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:53.428 traddr: 10.0.0.1 00:34:53.428 eflags: none 00:34:53.428 sectype: none 00:34:53.428 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:53.428 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:53.428 ===================================================== 00:34:53.428 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:53.428 ===================================================== 00:34:53.428 Controller Capabilities/Features 00:34:53.428 ================================ 00:34:53.428 Vendor ID: 0000 00:34:53.428 Subsystem Vendor ID: 0000 00:34:53.428 Serial Number: 3b3d1d00f1ef7113da17 00:34:53.428 Model Number: Linux 00:34:53.428 Firmware Version: 6.8.9-20 00:34:53.428 Recommended Arb Burst: 0 00:34:53.428 IEEE OUI Identifier: 00 00 00 00:34:53.428 Multi-path I/O 00:34:53.428 May have multiple subsystem ports: No 00:34:53.428 May have multiple controllers: No 00:34:53.428 Associated with SR-IOV VF: No 00:34:53.428 Max Data Transfer Size: Unlimited 00:34:53.428 Max Number of Namespaces: 0 00:34:53.428 Max Number of I/O Queues: 1024 00:34:53.428 NVMe Specification Version (VS): 1.3 00:34:53.428 NVMe Specification Version (Identify): 1.3 00:34:53.428 Maximum Queue Entries: 1024 00:34:53.428 Contiguous Queues Required: No 00:34:53.428 Arbitration Mechanisms Supported 00:34:53.428 Weighted Round Robin: Not Supported 00:34:53.428 Vendor Specific: Not Supported 00:34:53.428 Reset Timeout: 7500 ms 00:34:53.428 Doorbell Stride: 4 bytes 00:34:53.428 NVM Subsystem Reset: Not Supported 00:34:53.428 Command Sets Supported 00:34:53.428 NVM Command Set: Supported 00:34:53.428 Boot Partition: Not Supported 00:34:53.428 Memory Page Size Minimum: 4096 bytes 00:34:53.428 Memory Page Size Maximum: 4096 bytes 00:34:53.428 Persistent Memory Region: Not Supported 00:34:53.428 Optional Asynchronous Events Supported 00:34:53.428 Namespace Attribute Notices: Not Supported 00:34:53.428 Firmware Activation Notices: Not Supported 00:34:53.428 ANA Change Notices: Not Supported 00:34:53.428 PLE Aggregate Log Change Notices: Not Supported 00:34:53.428 LBA Status Info Alert Notices: Not Supported 00:34:53.428 EGE Aggregate Log Change Notices: Not Supported 00:34:53.428 Normal NVM Subsystem Shutdown event: Not Supported 00:34:53.428 Zone Descriptor Change Notices: Not Supported 00:34:53.428 Discovery Log Change Notices: Supported 00:34:53.428 Controller Attributes 00:34:53.428 128-bit Host Identifier: Not Supported 00:34:53.428 Non-Operational Permissive Mode: Not Supported 00:34:53.428 NVM Sets: Not Supported 00:34:53.428 Read Recovery Levels: Not Supported 00:34:53.428 Endurance Groups: Not Supported 00:34:53.428 Predictable Latency Mode: Not Supported 00:34:53.428 Traffic Based Keep ALive: Not Supported 00:34:53.428 Namespace Granularity: Not Supported 00:34:53.428 SQ Associations: Not Supported 00:34:53.428 UUID List: Not Supported 00:34:53.428 Multi-Domain Subsystem: Not Supported 00:34:53.428 Fixed Capacity Management: Not Supported 00:34:53.428 Variable Capacity Management: Not Supported 00:34:53.428 Delete Endurance Group: Not Supported 00:34:53.428 Delete NVM Set: Not Supported 00:34:53.428 Extended LBA Formats Supported: Not Supported 00:34:53.428 Flexible Data Placement 
Supported: Not Supported 00:34:53.428 00:34:53.428 Controller Memory Buffer Support 00:34:53.428 ================================ 00:34:53.428 Supported: No 00:34:53.428 00:34:53.428 Persistent Memory Region Support 00:34:53.428 ================================ 00:34:53.428 Supported: No 00:34:53.428 00:34:53.428 Admin Command Set Attributes 00:34:53.428 ============================ 00:34:53.428 Security Send/Receive: Not Supported 00:34:53.428 Format NVM: Not Supported 00:34:53.428 Firmware Activate/Download: Not Supported 00:34:53.428 Namespace Management: Not Supported 00:34:53.428 Device Self-Test: Not Supported 00:34:53.428 Directives: Not Supported 00:34:53.428 NVMe-MI: Not Supported 00:34:53.428 Virtualization Management: Not Supported 00:34:53.428 Doorbell Buffer Config: Not Supported 00:34:53.428 Get LBA Status Capability: Not Supported 00:34:53.428 Command & Feature Lockdown Capability: Not Supported 00:34:53.428 Abort Command Limit: 1 00:34:53.428 Async Event Request Limit: 1 00:34:53.428 Number of Firmware Slots: N/A 00:34:53.428 Firmware Slot 1 Read-Only: N/A 00:34:53.688 Firmware Activation Without Reset: N/A 00:34:53.688 Multiple Update Detection Support: N/A 00:34:53.688 Firmware Update Granularity: No Information Provided 00:34:53.688 Per-Namespace SMART Log: No 00:34:53.688 Asymmetric Namespace Access Log Page: Not Supported 00:34:53.688 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:53.688 Command Effects Log Page: Not Supported 00:34:53.688 Get Log Page Extended Data: Supported 00:34:53.688 Telemetry Log Pages: Not Supported 00:34:53.688 Persistent Event Log Pages: Not Supported 00:34:53.688 Supported Log Pages Log Page: May Support 00:34:53.688 Commands Supported & Effects Log Page: Not Supported 00:34:53.688 Feature Identifiers & Effects Log Page:May Support 00:34:53.688 NVMe-MI Commands & Effects Log Page: May Support 00:34:53.688 Data Area 4 for Telemetry Log: Not Supported 00:34:53.688 Error Log Page Entries Supported: 1 00:34:53.688 Keep Alive: Not Supported 00:34:53.688 00:34:53.688 NVM Command Set Attributes 00:34:53.688 ========================== 00:34:53.688 Submission Queue Entry Size 00:34:53.688 Max: 1 00:34:53.688 Min: 1 00:34:53.688 Completion Queue Entry Size 00:34:53.688 Max: 1 00:34:53.688 Min: 1 00:34:53.688 Number of Namespaces: 0 00:34:53.688 Compare Command: Not Supported 00:34:53.688 Write Uncorrectable Command: Not Supported 00:34:53.688 Dataset Management Command: Not Supported 00:34:53.688 Write Zeroes Command: Not Supported 00:34:53.688 Set Features Save Field: Not Supported 00:34:53.688 Reservations: Not Supported 00:34:53.688 Timestamp: Not Supported 00:34:53.688 Copy: Not Supported 00:34:53.688 Volatile Write Cache: Not Present 00:34:53.688 Atomic Write Unit (Normal): 1 00:34:53.688 Atomic Write Unit (PFail): 1 00:34:53.688 Atomic Compare & Write Unit: 1 00:34:53.688 Fused Compare & Write: Not Supported 00:34:53.688 Scatter-Gather List 00:34:53.688 SGL Command Set: Supported 00:34:53.688 SGL Keyed: Not Supported 00:34:53.688 SGL Bit Bucket Descriptor: Not Supported 00:34:53.688 SGL Metadata Pointer: Not Supported 00:34:53.688 Oversized SGL: Not Supported 00:34:53.688 SGL Metadata Address: Not Supported 00:34:53.688 SGL Offset: Supported 00:34:53.688 Transport SGL Data Block: Not Supported 00:34:53.688 Replay Protected Memory Block: Not Supported 00:34:53.688 00:34:53.688 Firmware Slot Information 00:34:53.688 ========================= 00:34:53.688 Active slot: 0 00:34:53.688 00:34:53.688 00:34:53.688 Error Log 00:34:53.688 
========= 00:34:53.688 00:34:53.688 Active Namespaces 00:34:53.688 ================= 00:34:53.688 Discovery Log Page 00:34:53.688 ================== 00:34:53.688 Generation Counter: 2 00:34:53.688 Number of Records: 2 00:34:53.688 Record Format: 0 00:34:53.688 00:34:53.688 Discovery Log Entry 0 00:34:53.688 ---------------------- 00:34:53.688 Transport Type: 3 (TCP) 00:34:53.688 Address Family: 1 (IPv4) 00:34:53.688 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:53.688 Entry Flags: 00:34:53.688 Duplicate Returned Information: 0 00:34:53.688 Explicit Persistent Connection Support for Discovery: 0 00:34:53.688 Transport Requirements: 00:34:53.688 Secure Channel: Not Specified 00:34:53.688 Port ID: 1 (0x0001) 00:34:53.688 Controller ID: 65535 (0xffff) 00:34:53.688 Admin Max SQ Size: 32 00:34:53.688 Transport Service Identifier: 4420 00:34:53.688 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:53.688 Transport Address: 10.0.0.1 00:34:53.688 Discovery Log Entry 1 00:34:53.688 ---------------------- 00:34:53.688 Transport Type: 3 (TCP) 00:34:53.688 Address Family: 1 (IPv4) 00:34:53.688 Subsystem Type: 2 (NVM Subsystem) 00:34:53.688 Entry Flags: 00:34:53.688 Duplicate Returned Information: 0 00:34:53.688 Explicit Persistent Connection Support for Discovery: 0 00:34:53.688 Transport Requirements: 00:34:53.688 Secure Channel: Not Specified 00:34:53.688 Port ID: 1 (0x0001) 00:34:53.688 Controller ID: 65535 (0xffff) 00:34:53.688 Admin Max SQ Size: 32 00:34:53.689 Transport Service Identifier: 4420 00:34:53.689 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:53.689 Transport Address: 10.0.0.1 00:34:53.689 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:53.689 get_feature(0x01) failed 00:34:53.689 get_feature(0x02) failed 00:34:53.689 get_feature(0x04) failed 00:34:53.689 ===================================================== 00:34:53.689 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:53.689 ===================================================== 00:34:53.689 Controller Capabilities/Features 00:34:53.689 ================================ 00:34:53.689 Vendor ID: 0000 00:34:53.689 Subsystem Vendor ID: 0000 00:34:53.689 Serial Number: bcd90f2ef8f93f8fb02e 00:34:53.689 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:53.689 Firmware Version: 6.8.9-20 00:34:53.689 Recommended Arb Burst: 6 00:34:53.689 IEEE OUI Identifier: 00 00 00 00:34:53.689 Multi-path I/O 00:34:53.689 May have multiple subsystem ports: Yes 00:34:53.689 May have multiple controllers: Yes 00:34:53.689 Associated with SR-IOV VF: No 00:34:53.689 Max Data Transfer Size: Unlimited 00:34:53.689 Max Number of Namespaces: 1024 00:34:53.689 Max Number of I/O Queues: 128 00:34:53.689 NVMe Specification Version (VS): 1.3 00:34:53.689 NVMe Specification Version (Identify): 1.3 00:34:53.689 Maximum Queue Entries: 1024 00:34:53.689 Contiguous Queues Required: No 00:34:53.689 Arbitration Mechanisms Supported 00:34:53.689 Weighted Round Robin: Not Supported 00:34:53.689 Vendor Specific: Not Supported 00:34:53.689 Reset Timeout: 7500 ms 00:34:53.689 Doorbell Stride: 4 bytes 00:34:53.689 NVM Subsystem Reset: Not Supported 00:34:53.689 Command Sets Supported 00:34:53.689 NVM Command Set: Supported 00:34:53.689 Boot Partition: Not Supported 00:34:53.689 
Memory Page Size Minimum: 4096 bytes 00:34:53.689 Memory Page Size Maximum: 4096 bytes 00:34:53.689 Persistent Memory Region: Not Supported 00:34:53.689 Optional Asynchronous Events Supported 00:34:53.689 Namespace Attribute Notices: Supported 00:34:53.689 Firmware Activation Notices: Not Supported 00:34:53.689 ANA Change Notices: Supported 00:34:53.689 PLE Aggregate Log Change Notices: Not Supported 00:34:53.689 LBA Status Info Alert Notices: Not Supported 00:34:53.689 EGE Aggregate Log Change Notices: Not Supported 00:34:53.689 Normal NVM Subsystem Shutdown event: Not Supported 00:34:53.689 Zone Descriptor Change Notices: Not Supported 00:34:53.689 Discovery Log Change Notices: Not Supported 00:34:53.689 Controller Attributes 00:34:53.689 128-bit Host Identifier: Supported 00:34:53.689 Non-Operational Permissive Mode: Not Supported 00:34:53.689 NVM Sets: Not Supported 00:34:53.689 Read Recovery Levels: Not Supported 00:34:53.689 Endurance Groups: Not Supported 00:34:53.689 Predictable Latency Mode: Not Supported 00:34:53.689 Traffic Based Keep ALive: Supported 00:34:53.689 Namespace Granularity: Not Supported 00:34:53.689 SQ Associations: Not Supported 00:34:53.689 UUID List: Not Supported 00:34:53.689 Multi-Domain Subsystem: Not Supported 00:34:53.689 Fixed Capacity Management: Not Supported 00:34:53.689 Variable Capacity Management: Not Supported 00:34:53.689 Delete Endurance Group: Not Supported 00:34:53.689 Delete NVM Set: Not Supported 00:34:53.689 Extended LBA Formats Supported: Not Supported 00:34:53.689 Flexible Data Placement Supported: Not Supported 00:34:53.689 00:34:53.689 Controller Memory Buffer Support 00:34:53.689 ================================ 00:34:53.689 Supported: No 00:34:53.689 00:34:53.689 Persistent Memory Region Support 00:34:53.689 ================================ 00:34:53.689 Supported: No 00:34:53.689 00:34:53.689 Admin Command Set Attributes 00:34:53.689 ============================ 00:34:53.689 Security Send/Receive: Not Supported 00:34:53.689 Format NVM: Not Supported 00:34:53.689 Firmware Activate/Download: Not Supported 00:34:53.689 Namespace Management: Not Supported 00:34:53.689 Device Self-Test: Not Supported 00:34:53.689 Directives: Not Supported 00:34:53.689 NVMe-MI: Not Supported 00:34:53.689 Virtualization Management: Not Supported 00:34:53.689 Doorbell Buffer Config: Not Supported 00:34:53.689 Get LBA Status Capability: Not Supported 00:34:53.689 Command & Feature Lockdown Capability: Not Supported 00:34:53.689 Abort Command Limit: 4 00:34:53.689 Async Event Request Limit: 4 00:34:53.689 Number of Firmware Slots: N/A 00:34:53.689 Firmware Slot 1 Read-Only: N/A 00:34:53.689 Firmware Activation Without Reset: N/A 00:34:53.689 Multiple Update Detection Support: N/A 00:34:53.689 Firmware Update Granularity: No Information Provided 00:34:53.689 Per-Namespace SMART Log: Yes 00:34:53.689 Asymmetric Namespace Access Log Page: Supported 00:34:53.689 ANA Transition Time : 10 sec 00:34:53.689 00:34:53.689 Asymmetric Namespace Access Capabilities 00:34:53.689 ANA Optimized State : Supported 00:34:53.689 ANA Non-Optimized State : Supported 00:34:53.689 ANA Inaccessible State : Supported 00:34:53.689 ANA Persistent Loss State : Supported 00:34:53.689 ANA Change State : Supported 00:34:53.689 ANAGRPID is not changed : No 00:34:53.689 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:53.689 00:34:53.689 ANA Group Identifier Maximum : 128 00:34:53.689 Number of ANA Group Identifiers : 128 00:34:53.689 Max Number of Allowed Namespaces : 1024 00:34:53.689 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:53.689 Command Effects Log Page: Supported 00:34:53.689 Get Log Page Extended Data: Supported 00:34:53.689 Telemetry Log Pages: Not Supported 00:34:53.689 Persistent Event Log Pages: Not Supported 00:34:53.689 Supported Log Pages Log Page: May Support 00:34:53.689 Commands Supported & Effects Log Page: Not Supported 00:34:53.689 Feature Identifiers & Effects Log Page:May Support 00:34:53.689 NVMe-MI Commands & Effects Log Page: May Support 00:34:53.689 Data Area 4 for Telemetry Log: Not Supported 00:34:53.689 Error Log Page Entries Supported: 128 00:34:53.689 Keep Alive: Supported 00:34:53.689 Keep Alive Granularity: 1000 ms 00:34:53.690 00:34:53.690 NVM Command Set Attributes 00:34:53.690 ========================== 00:34:53.690 Submission Queue Entry Size 00:34:53.690 Max: 64 00:34:53.690 Min: 64 00:34:53.690 Completion Queue Entry Size 00:34:53.690 Max: 16 00:34:53.690 Min: 16 00:34:53.690 Number of Namespaces: 1024 00:34:53.690 Compare Command: Not Supported 00:34:53.690 Write Uncorrectable Command: Not Supported 00:34:53.690 Dataset Management Command: Supported 00:34:53.690 Write Zeroes Command: Supported 00:34:53.690 Set Features Save Field: Not Supported 00:34:53.690 Reservations: Not Supported 00:34:53.690 Timestamp: Not Supported 00:34:53.690 Copy: Not Supported 00:34:53.690 Volatile Write Cache: Present 00:34:53.690 Atomic Write Unit (Normal): 1 00:34:53.690 Atomic Write Unit (PFail): 1 00:34:53.690 Atomic Compare & Write Unit: 1 00:34:53.690 Fused Compare & Write: Not Supported 00:34:53.690 Scatter-Gather List 00:34:53.690 SGL Command Set: Supported 00:34:53.690 SGL Keyed: Not Supported 00:34:53.690 SGL Bit Bucket Descriptor: Not Supported 00:34:53.690 SGL Metadata Pointer: Not Supported 00:34:53.690 Oversized SGL: Not Supported 00:34:53.690 SGL Metadata Address: Not Supported 00:34:53.690 SGL Offset: Supported 00:34:53.690 Transport SGL Data Block: Not Supported 00:34:53.690 Replay Protected Memory Block: Not Supported 00:34:53.690 00:34:53.690 Firmware Slot Information 00:34:53.690 ========================= 00:34:53.690 Active slot: 0 00:34:53.690 00:34:53.690 Asymmetric Namespace Access 00:34:53.690 =========================== 00:34:53.690 Change Count : 0 00:34:53.690 Number of ANA Group Descriptors : 1 00:34:53.690 ANA Group Descriptor : 0 00:34:53.690 ANA Group ID : 1 00:34:53.690 Number of NSID Values : 1 00:34:53.690 Change Count : 0 00:34:53.690 ANA State : 1 00:34:53.690 Namespace Identifier : 1 00:34:53.690 00:34:53.690 Commands Supported and Effects 00:34:53.690 ============================== 00:34:53.690 Admin Commands 00:34:53.690 -------------- 00:34:53.690 Get Log Page (02h): Supported 00:34:53.690 Identify (06h): Supported 00:34:53.690 Abort (08h): Supported 00:34:53.690 Set Features (09h): Supported 00:34:53.690 Get Features (0Ah): Supported 00:34:53.690 Asynchronous Event Request (0Ch): Supported 00:34:53.690 Keep Alive (18h): Supported 00:34:53.690 I/O Commands 00:34:53.690 ------------ 00:34:53.690 Flush (00h): Supported 00:34:53.690 Write (01h): Supported LBA-Change 00:34:53.690 Read (02h): Supported 00:34:53.690 Write Zeroes (08h): Supported LBA-Change 00:34:53.690 Dataset Management (09h): Supported 00:34:53.690 00:34:53.690 Error Log 00:34:53.690 ========= 00:34:53.690 Entry: 0 00:34:53.690 Error Count: 0x3 00:34:53.690 Submission Queue Id: 0x0 00:34:53.690 Command Id: 0x5 00:34:53.690 Phase Bit: 0 00:34:53.690 Status Code: 0x2 00:34:53.690 Status Code Type: 0x0 00:34:53.690 Do Not Retry: 1 00:34:53.690 
Error Location: 0x28 00:34:53.690 LBA: 0x0 00:34:53.690 Namespace: 0x0 00:34:53.690 Vendor Log Page: 0x0 00:34:53.690 ----------- 00:34:53.690 Entry: 1 00:34:53.690 Error Count: 0x2 00:34:53.690 Submission Queue Id: 0x0 00:34:53.690 Command Id: 0x5 00:34:53.690 Phase Bit: 0 00:34:53.690 Status Code: 0x2 00:34:53.690 Status Code Type: 0x0 00:34:53.690 Do Not Retry: 1 00:34:53.690 Error Location: 0x28 00:34:53.690 LBA: 0x0 00:34:53.690 Namespace: 0x0 00:34:53.690 Vendor Log Page: 0x0 00:34:53.690 ----------- 00:34:53.690 Entry: 2 00:34:53.690 Error Count: 0x1 00:34:53.690 Submission Queue Id: 0x0 00:34:53.690 Command Id: 0x4 00:34:53.690 Phase Bit: 0 00:34:53.690 Status Code: 0x2 00:34:53.690 Status Code Type: 0x0 00:34:53.690 Do Not Retry: 1 00:34:53.690 Error Location: 0x28 00:34:53.690 LBA: 0x0 00:34:53.690 Namespace: 0x0 00:34:53.690 Vendor Log Page: 0x0 00:34:53.690 00:34:53.690 Number of Queues 00:34:53.690 ================ 00:34:53.690 Number of I/O Submission Queues: 128 00:34:53.690 Number of I/O Completion Queues: 128 00:34:53.690 00:34:53.690 ZNS Specific Controller Data 00:34:53.690 ============================ 00:34:53.690 Zone Append Size Limit: 0 00:34:53.690 00:34:53.690 00:34:53.690 Active Namespaces 00:34:53.690 ================= 00:34:53.690 get_feature(0x05) failed 00:34:53.690 Namespace ID:1 00:34:53.690 Command Set Identifier: NVM (00h) 00:34:53.690 Deallocate: Supported 00:34:53.690 Deallocated/Unwritten Error: Not Supported 00:34:53.690 Deallocated Read Value: Unknown 00:34:53.690 Deallocate in Write Zeroes: Not Supported 00:34:53.690 Deallocated Guard Field: 0xFFFF 00:34:53.690 Flush: Supported 00:34:53.690 Reservation: Not Supported 00:34:53.690 Namespace Sharing Capabilities: Multiple Controllers 00:34:53.690 Size (in LBAs): 1953525168 (931GiB) 00:34:53.690 Capacity (in LBAs): 1953525168 (931GiB) 00:34:53.690 Utilization (in LBAs): 1953525168 (931GiB) 00:34:53.690 UUID: 6d1bc144-bde4-4774-88c6-7dbdb56f6298 00:34:53.690 Thin Provisioning: Not Supported 00:34:53.690 Per-NS Atomic Units: Yes 00:34:53.690 Atomic Boundary Size (Normal): 0 00:34:53.690 Atomic Boundary Size (PFail): 0 00:34:53.690 Atomic Boundary Offset: 0 00:34:53.690 NGUID/EUI64 Never Reused: No 00:34:53.690 ANA group ID: 1 00:34:53.690 Namespace Write Protected: No 00:34:53.690 Number of LBA Formats: 1 00:34:53.690 Current LBA Format: LBA Format #00 00:34:53.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:53.690 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.691 rmmod nvme_tcp 00:34:53.691 rmmod nvme_fabrics 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:53.691 20:04:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.691 20:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:56.223 20:04:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.160 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:57.160 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:57.160 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:58.097 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:58.097 00:34:58.097 real 0m9.607s 00:34:58.097 user 0m2.176s 00:34:58.097 sys 0m3.464s 00:34:58.097 20:04:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:58.097 20:04:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:58.097 ************************************ 00:34:58.098 END TEST nvmf_identify_kernel_target 00:34:58.098 ************************************ 00:34:58.098 20:04:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:58.098 20:04:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:58.098 20:04:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:58.098 20:04:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.098 ************************************ 00:34:58.098 START TEST nvmf_auth_host 00:34:58.098 ************************************ 00:34:58.098 20:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:58.356 * Looking for test storage... 
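The long run of "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above is setup.sh detaching the I/OAT DMA channels and the test SSD from their kernel drivers and handing them to vfio-pci for the userspace tests that follow. The script's internals are not part of this log; a minimal sketch of the standard sysfs rebind sequence that such a switch typically relies on (the PCI address is only the example visible above) is:

dev=0000:88:00.0                                            # example address taken from the output above
echo "$dev"   > /sys/bus/pci/devices/$dev/driver/unbind     # detach from the current kernel driver (nvme/ioatdma)
echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override   # prefer vfio-pci on the next probe
modprobe vfio-pci                                           # make sure the driver is loaded
echo "$dev"   > /sys/bus/pci/drivers_probe                  # re-probe; vfio-pci now claims the device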
00:34:58.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:58.356 20:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:58.356 20:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:34:58.356 20:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:58.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.356 --rc genhtml_branch_coverage=1 00:34:58.356 --rc genhtml_function_coverage=1 00:34:58.356 --rc genhtml_legend=1 00:34:58.356 --rc geninfo_all_blocks=1 00:34:58.356 --rc geninfo_unexecuted_blocks=1 00:34:58.356 00:34:58.356 ' 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:58.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.356 --rc genhtml_branch_coverage=1 00:34:58.356 --rc genhtml_function_coverage=1 00:34:58.356 --rc genhtml_legend=1 00:34:58.356 --rc geninfo_all_blocks=1 00:34:58.356 --rc geninfo_unexecuted_blocks=1 00:34:58.356 00:34:58.356 ' 00:34:58.356 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:58.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.356 --rc genhtml_branch_coverage=1 00:34:58.356 --rc genhtml_function_coverage=1 00:34:58.356 --rc genhtml_legend=1 00:34:58.356 --rc geninfo_all_blocks=1 00:34:58.356 --rc geninfo_unexecuted_blocks=1 00:34:58.356 00:34:58.356 ' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:58.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.357 --rc genhtml_branch_coverage=1 00:34:58.357 --rc genhtml_function_coverage=1 00:34:58.357 --rc genhtml_legend=1 00:34:58.357 --rc geninfo_all_blocks=1 00:34:58.357 --rc geninfo_unexecuted_blocks=1 00:34:58.357 00:34:58.357 ' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.357 20:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:58.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:58.357 20:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:00.255 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:00.256 20:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:00.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:00.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:00.256 
20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:00.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:00.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:00.256 20:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:00.256 20:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:00.256 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:00.256 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:00.256 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:00.256 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:00.514 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:00.514 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:00.514 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:00.514 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:00.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:00.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:35:00.514 00:35:00.514 --- 10.0.0.2 ping statistics --- 00:35:00.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.514 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:00.514 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:00.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:00.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:35:00.514 00:35:00.514 --- 10.0.0.1 ping statistics --- 00:35:00.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.515 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3136978 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3136978 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3136978 ']' 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
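At this point nvmf_tcp_init has split the two E810 ports between the root namespace and a fresh cvl_0_0_ns_spdk namespace, put 10.0.0.1 on the initiator-side cvl_0_1 and 10.0.0.2 on the target-side cvl_0_0, opened TCP port 4420 in iptables, and confirmed reachability with the two pings above; nvmf_tgt is then started inside that namespace. Condensed from the trace above (interface names and addresses exactly as logged):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # root namespace to target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace back to the root namespace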
00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:00.515 20:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8957c903575d9d3cd7caec9e290c0809 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.sq4 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8957c903575d9d3cd7caec9e290c0809 0 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8957c903575d9d3cd7caec9e290c0809 0 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8957c903575d9d3cd7caec9e290c0809 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.sq4 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.sq4 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.sq4 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.477 20:04:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bdc9cd5383e18a448ce28f1051e40f30856f89871e2a6c5c3c25c43a7afac55d 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.QsK 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bdc9cd5383e18a448ce28f1051e40f30856f89871e2a6c5c3c25c43a7afac55d 3 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bdc9cd5383e18a448ce28f1051e40f30856f89871e2a6c5c3c25c43a7afac55d 3 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bdc9cd5383e18a448ce28f1051e40f30856f89871e2a6c5c3c25c43a7afac55d 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.QsK 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.QsK 00:35:01.477 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.QsK 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=07a41a0e073b466168b4b87ce03458e0c04e8a24eff63afe 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.kCD 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 07a41a0e073b466168b4b87ce03458e0c04e8a24eff63afe 0 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 07a41a0e073b466168b4b87ce03458e0c04e8a24eff63afe 0 
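Each gen_dhchap_key call in this stretch draws len/2 random bytes with xxd and hands them to format_key together with a digest code from the digests map in the trace (null=0, sha256=1, sha384=2, sha512=3); the inline "python -" step then emits the DH-HMAC-CHAP secret string "DHHC-1:<code>:<base64 payload>:" that is written to the /tmp/spdk.key-* files. The body of that python step is not reproduced in the log, so the sketch below is a reconstruction based on the published secret format, under the assumption that the base64 payload is the raw key followed by its little-endian CRC-32:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes as 32 hex chars (the "null 32" case above)
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PY'      # reconstruction of the format, not the script's verbatim code
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")   # assumed: CRC-32 of the key, appended little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY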
00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=07a41a0e073b466168b4b87ce03458e0c04e8a24eff63afe 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.kCD 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.kCD 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kCD 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=38106ae0a5a8ed92d0b8442e3954ab4919cdf88b89088440 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.699 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 38106ae0a5a8ed92d0b8442e3954ab4919cdf88b89088440 2 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 38106ae0a5a8ed92d0b8442e3954ab4919cdf88b89088440 2 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.736 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=38106ae0a5a8ed92d0b8442e3954ab4919cdf88b89088440 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.699 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.699 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.699 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.737 20:04:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=20a3641290d56acd92509ce0960c1199 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.VaX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 20a3641290d56acd92509ce0960c1199 1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 20a3641290d56acd92509ce0960c1199 1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=20a3641290d56acd92509ce0960c1199 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.VaX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.VaX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VaX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=14ac283c9b7cc42a6420752f8868157a 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.T0I 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 14ac283c9b7cc42a6420752f8868157a 1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 14ac283c9b7cc42a6420752f8868157a 1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=14ac283c9b7cc42a6420752f8868157a 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.T0I 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.T0I 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.T0I 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9093a36da3ff3ec2f10581e1467cd0977fcec88bae5f63f5 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.hQP 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9093a36da3ff3ec2f10581e1467cd0977fcec88bae5f63f5 2 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9093a36da3ff3ec2f10581e1467cd0977fcec88bae5f63f5 2 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=9093a36da3ff3ec2f10581e1467cd0977fcec88bae5f63f5 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.hQP 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.hQP 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hQP 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:35:01.737 20:04:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:01.737 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2db2b49c4728420e74cadc1e9d15a349 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.FQF 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2db2b49c4728420e74cadc1e9d15a349 0 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2db2b49c4728420e74cadc1e9d15a349 0 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.995 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2db2b49c4728420e74cadc1e9d15a349 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.FQF 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.FQF 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.FQF 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d667b51fe383e8f11d415b9cf684f0efc515a82396dd75fedeb5cb8b2b2cc166 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.ANA 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d667b51fe383e8f11d415b9cf684f0efc515a82396dd75fedeb5cb8b2b2cc166 3 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d667b51fe383e8f11d415b9cf684f0efc515a82396dd75fedeb5cb8b2b2cc166 3 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d667b51fe383e8f11d415b9cf684f0efc515a82396dd75fedeb5cb8b2b2cc166 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.ANA 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.ANA 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ANA 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3136978 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3136978 ']' 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.996 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sq4 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.QsK ]] 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QsK 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kCD 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.699 ]] 00:35:02.254 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.699 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VaX 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.T0I ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T0I 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hQP 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FQF ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FQF 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ANA 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:02.255 20:04:51 
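[editorial note] With the target process answering on /var/tmp/spdk.sock (the waitforlisten above), each generated file is registered as a named key in the SPDK keyring before authentication starts. rpc_cmd in the trace is the autotest wrapper around the SPDK RPC client; the sketch below assumes the usual scripts/rpc.py entry point in the checked-out tree and reuses the key file names from this run:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path of the RPC client
    "$rpc_py" keyring_file_add_key key0  /tmp/spdk.key-null.sq4      # host secret for key id 0
    "$rpc_py" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QsK    # matching controller secret
    "$rpc_py" keyring_file_add_key key1  /tmp/spdk.key-null.kCD
    "$rpc_py" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.699
    # ...key2/ckey2, key3/ckey3 and key4 follow the same pattern; key4 has no controller secret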
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:02.255 20:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:35:02.255 20:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:02.255 20:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:03.188 Waiting for block devices as requested 00:35:03.446 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:03.446 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:03.704 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:03.704 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:03.704 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:03.962 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:03.962 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:03.962 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:03.962 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:04.219 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:04.219 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:04.219 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:04.219 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:04.477 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:04.477 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:04.477 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:04.477 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:05.043 No valid GPT data, bailing 00:35:05.043 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:05.044 20:04:54 
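[editorial note] configure_kernel_target then builds a kernel NVMe-oF target over configfs: load nvmet, create a subsystem, back namespace 1 with the scanned /dev/nvme0n1, and expose it on a TCP port. The trace truncates the redirect targets of the echo commands that follow, so the attribute file names in this sketch are the standard nvmet configfs ones and should be read as assumptions:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string seen in the trace
    echo 1            > "$subsys/attr_allow_any_host"             # opened up here, locked down again later
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # publish the subsystem on the port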
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:05.044 00:35:05.044 Discovery Log Number of Records 2, Generation counter 2 00:35:05.044 =====Discovery Log Entry 0====== 00:35:05.044 trtype: tcp 00:35:05.044 adrfam: ipv4 00:35:05.044 subtype: current discovery subsystem 00:35:05.044 treq: not specified, sq flow control disable supported 00:35:05.044 portid: 1 00:35:05.044 trsvcid: 4420 00:35:05.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:05.044 traddr: 10.0.0.1 00:35:05.044 eflags: none 00:35:05.044 sectype: none 00:35:05.044 =====Discovery Log Entry 1====== 00:35:05.044 trtype: tcp 00:35:05.044 adrfam: ipv4 00:35:05.044 subtype: nvme subsystem 00:35:05.044 treq: not specified, sq flow control disable supported 00:35:05.044 portid: 1 00:35:05.044 trsvcid: 4420 00:35:05.044 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:05.044 traddr: 10.0.0.1 00:35:05.044 eflags: none 00:35:05.044 sectype: none 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
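[editorial note] Right after the discovery sanity check, the host NQN is authorized explicitly: a host entry is created, attr_allow_any_host is turned back off (the "echo 0" above), and the host is linked into the subsystem's allowed_hosts. nvmet_auth_set_key then pushes the digest, DH group and DHHC-1 secrets for this iteration into that host entry via the echo commands around this point in the trace; their redirect targets are truncated, so the dhchap_* attribute names below are the standard kernel ones and are an assumption:

    nvmet=/sys/kernel/config/nvmet
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"            # only whitelisted hosts may connect from here on
    ln -s "$host" "$subsys/allowed_hosts/"
    # nvmet_auth_set_key sha256 ffdhe2048 1, with the secrets generated for key id 1 in this run
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==:' > "$host/dhchap_ctrl_key"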
-- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.044 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.302 nvme0n1 00:35:05.302 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.302 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.302 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.302 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.302 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.302 20:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.302 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
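[editorial note] On the initiator side, this first pass leaves every digest and DH group enabled and attaches to the kernel target with the key1/ckey1 pair registered in the keyring earlier. Condensed, the two RPCs traced above amount to the following (same assumed rpc.py wrapper as in the earlier sketch):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
    "$rpc_py" bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1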
00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.303 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.561 nvme0n1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.561 20:04:55 
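[editorial note] Every connect_authenticate block from here on repeats the same cycle for one (digest, dhgroup, keyid) combination: program the target-side host entry, restrict bdev_nvme to exactly that combination, attach, check that the controller actually shows up, and detach before the next round. A skeleton of the loop driving the rest of this log, with helper names taken from host/auth.sh in the trace:

    # host/auth.sh@100-104: iterate every digest, DH group and key id
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: dhchap_* attributes
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side: attach + verify + detach
            done
        done
    done
    # inside connect_authenticate the verification is essentially:
    #   [[ $("$rpc_py" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    #   "$rpc_py" bdev_nvme_detach_controller nvme0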
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.561 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.820 nvme0n1 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.820 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.078 nvme0n1 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.078 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.336 nvme0n1 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.336 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 
00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.337 20:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.596 nvme0n1 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.596 20:04:56 
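[editorial note] Key id 4 is the unidirectional case: its ckeys slot was left empty when the secrets were generated, so nvmet_auth_set_key skips the controller key (the "[[ -z '' ]]" check above) and the attach in the trace passes only --dhchap-key, meaning the host proves its identity to the target but does not challenge the controller in return. As an illustrative one-liner:

    # host-only DH-HMAC-CHAP for key id 4; no --dhchap-ctrlr-key is supplied
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4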
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:06.596 
20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.596 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.855 nvme0n1 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:06.855 20:04:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.855 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.135 nvme0n1 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.135 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.136 20:04:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.136 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.396 nvme0n1 00:35:07.396 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.396 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.396 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.396 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.396 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.396 20:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:07.396 20:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.396 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.655 nvme0n1 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.655 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 nvme0n1 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.913 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 nvme0n1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.172 20:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.172 20:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.429 nvme0n1 00:35:08.429 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.429 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:08.429 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.429 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.429 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:08.686 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.687 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.945 nvme0n1 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.945 20:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.945 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.203 nvme0n1 00:35:09.203 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.203 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.203 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.203 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.203 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.203 20:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.203 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.203 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.203 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.203 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.203 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.461 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.720 nvme0n1 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 
]] 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.720 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.286 nvme0n1 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.286 20:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.286 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.852 nvme0n1 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.852 20:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.852 20:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.418 nvme0n1 00:35:11.418 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.418 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.418 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.418 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.418 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.418 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:11.676 
20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.676 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.242 nvme0n1 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.242 20:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.808 nvme0n1 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:12.808 20:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:12.808 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.809 20:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.742 nvme0n1 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:13.742 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.743 20:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.675 nvme0n1 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.675 20:05:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.675 20:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.609 nvme0n1 00:35:15.609 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.609 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.609 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.609 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.609 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:15.866 20:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.866 20:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 nvme0n1 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:16.799 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:16.800 20:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.800 20:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.733 nvme0n1 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:17.733 
20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.733 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.734 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.992 nvme0n1 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@767 -- # local ip 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:17.992 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:17.993 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:17.993 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.993 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.251 nvme0n1 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.251 20:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.509 nvme0n1 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.509 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.510 20:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.510 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.768 nvme0n1 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
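For reference, one iteration of the digest/dhgroup/keyid sweep being traced above reduces to the shell sketch below. It reuses only the RPC names and flags that appear verbatim in the trace (bdev_nvme_set_options, bdev_nvme_attach_controller, bdev_nvme_get_controllers, bdev_nvme_detach_controller and their --dhchap-* arguments); rpc_cmd is assumed to be SPDK's rpc.py wrapper from autotest_common.sh, and the host-side keys (key2/ckey2 here) are assumed to have been registered earlier in the script, outside this excerpt.

# Sketch (not captured from the log): one host-side iteration of the DH-HMAC-CHAP sweep.
digest=sha384
dhgroup=ffdhe2048
keyid=2

# Restrict the host to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect to the target with DH-HMAC-CHAP using the per-keyid host and controller keys.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller actually came up, then tear it down before the next iteration.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Note that for keyid 4 the trace shows an empty controller key (ckey=), so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion visible at host/auth.sh@58 drops the --dhchap-ctrlr-key argument for those iterations.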
00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:18.768 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.026 nvme0n1 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.026 20:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.026 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.286 nvme0n1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.286 20:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.286 20:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.286 20:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.545 nvme0n1 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.545 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.803 nvme0n1 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.803 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.804 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.062 nvme0n1 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.062 
20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.062 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.320 nvme0n1 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.320 
20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.320 20:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.320 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.320 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:20.320 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.320 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.321 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.579 nvme0n1 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.579 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:20.580 20:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.580 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.145 nvme0n1 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:21.145 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.146 20:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.404 nvme0n1 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.404 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.662 nvme0n1 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.662 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.920 20:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.920 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.178 nvme0n1 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.178 20:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 nvme0n1 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:22.744 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.745 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.745 20:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.311 nvme0n1 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.311 20:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.311 20:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.311 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.877 nvme0n1 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.877 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.136 20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.136 
20:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.394 nvme0n1 00:35:24.394 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.394 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.394 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.394 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.394 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.394 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.652 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.219 nvme0n1 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.219 20:05:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.219 20:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.153 nvme0n1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.153 20:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.088 nvme0n1 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.088 
20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.088 20:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.023 nvme0n1 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:28.023 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.283 20:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.218 nvme0n1 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.218 20:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:29.218 20:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.218 20:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.154 nvme0n1 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:30.154 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:30.155 nvme0n1 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.155 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.413 20:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.413 nvme0n1 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.413 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:30.671 
20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.671 nvme0n1 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.671 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.929 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.930 
20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.930 nvme0n1 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.930 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 nvme0n1 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:31.188 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.189 20:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.447 nvme0n1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.447 
20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:31.447 20:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.447 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.709 nvme0n1 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:31.709 20:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:31.709 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.710 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.710 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.007 nvme0n1 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.007 20:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.007 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.289 nvme0n1 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.289 20:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.289 
20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:32.289 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:32.290 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:32.290 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.290 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
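The trace above repeats the same connect/verify/detach cycle for every (digest, dhgroup, keyid) combination. Condensed into a single iteration, and using only the helpers already visible in the trace (nvmet_auth_set_key, get_main_ns_ip, rpc_cmd), the flow for the sha512/ffdhe4096 pass that starts below looks roughly like the sketch that follows; this is not the literal body of host/auth.sh, and it assumes the keyring entries key0/ckey0 were registered earlier in the run:
# provision the target-side DH-HMAC-CHAP key for this combination
nvmet_auth_set_key sha512 ffdhe4096 0
# restrict the host to the same digest and DH group, then attach with the matching key pair
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# successful mutual authentication leaves controller nvme0 behind; verify, then tear down
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0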
00:35:32.549 nvme0n1 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.549 20:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:32.549 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.550 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.807 nvme0n1 00:35:32.807 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.807 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.807 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.807 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.807 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.807 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.067 20:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.067 20:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.067 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.328 nvme0n1 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.328 20:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.328 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.588 nvme0n1 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.588 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.589 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.159 nvme0n1 00:35:34.159 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.160 20:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.420 nvme0n1 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.420 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.421 20:05:24 
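Each nvmet_auth_set_key call above prepares the kernel nvmet target for the next round: the trace shows it echoing the digest ('hmac(sha512)'), the DH group, the host secret and, when one is configured, the controller secret. The redirection targets are not visible in the xtrace output, so the configfs paths and attribute names in the sketch below are assumptions about where those echoes land, not something this log confirms:

# Hedged sketch of the target-side writes behind host/auth.sh@48-@51.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed location, not shown in the log
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest echoed at @48
echo "$dhgroup"     > "$host_dir/dhchap_dhgroup"    # DH group echoed at @49 (ffdhe6144 in the round above)
echo "$key"         > "$host_dir/dhchap_key"        # host secret echoed at @50
if [[ -n $ckey ]]; then
    echo "$ckey" > "$host_dir/dhchap_ctrl_key"      # controller secret, only when the round has one (@51)
fi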
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.421 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 nvme0n1 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:34.990 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.991 20:05:24 
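On the host side, connect_authenticate first narrows the SPDK bdev_nvme layer to the digest and DH group under test and then attaches a controller with the matching key pair, exactly as the rpc_cmd invocations above show. A condensed rendering of that sequence for the round just logged; the key1/ckey1 names refer to key material registered earlier in the run, outside this excerpt:

# Condensed from the rpc_cmd calls visible at host/auth.sh@60 and @61.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1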
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.991 20:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.559 nvme0n1 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.559 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.817 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.384 nvme0n1 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.384 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.385 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:36.385 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.385 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.385 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.385 20:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.385 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.955 nvme0n1 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.955 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:36.956 20:05:26 
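The keyid=4 rounds differ from the others in that no controller secret is configured: ckey expands to the empty string, the [[ -z '' ]] guard skips the controller-key write on the target, and the following attach (like the earlier ffdhe4096 keyid=4 round) carries only --dhchap-key key4. The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what makes the flag optional, which in effect switches those rounds from bidirectional to unidirectional authentication. A small illustration of the same pattern; the trace only shows the already-expanded commands, so the parameterized form is a reconstruction:

# The ':+' expansion yields the option pair only when ckeys[keyid] is set and non-empty.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"    # with keyid=4 the array is empty and the flag is omitted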
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.956 20:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.525 nvme0n1 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk1N2M5MDM1NzVkOWQzY2Q3Y2FlYzllMjkwYzA4MDmkKlZ6: 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRjOWNkNTM4M2UxOGE0NDhjZTI4ZjEwNTFlNDBmMzA4NTZmODk4NzFlMmE2YzVjM2MyNWM0M2E3YWZhYzU1ZK3UJFE=: 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.525 20:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.464 nvme0n1 00:35:38.464 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.464 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.464 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.464 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.464 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.464 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.724 20:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.663 nvme0n1 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.663 20:05:29 
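After each attach, the script confirms that authentication actually succeeded before moving on: bdev_nvme_get_controllers is piped through jq to pull out the controller name, the [[ nvme0 == \n\v\m\e\0 ]] test asserts that the expected controller exists, and bdev_nvme_detach_controller tears it down so the next key/DH-group combination starts clean. A compact rendering of that check as it appears at host/auth.sh@64-@65:

# Verify the authenticated controller came up, then detach it for the next round.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                    # the round only passes if the expected controller is reported
rpc_cmd bdev_nvme_detach_controller nvme0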
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.663 20:05:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.663 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.600 nvme0n1 00:35:40.600 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.600 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.600 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.600 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.600 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.600 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.859 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA5M2EzNmRhM2ZmM2VjMmYxMDU4MWUxNDY3Y2QwOTc3ZmNlYzg4YmFlNWY2M2Y1lgKp7A==: 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: ]] 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmRiMmI0OWM0NzI4NDIwZTc0Y2FkYzFlOWQxNWEzNDl7g8Ii: 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.860 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.860 
20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.797 nvme0n1 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.797 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY2N2I1MWZlMzgzZThmMTFkNDE1YjljZjY4NGYwZWZjNTE1YTgyMzk2ZGQ3NWZlZGViNWNiOGIyYjJjYzE2NsilwIs=: 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.798 20:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.735 nvme0n1 00:35:42.735 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.735 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.735 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.735 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.735 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.735 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.994 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.995 request: 00:35:42.995 { 00:35:42.995 "name": "nvme0", 00:35:42.995 "trtype": "tcp", 00:35:42.995 "traddr": "10.0.0.1", 00:35:42.995 "adrfam": "ipv4", 00:35:42.995 "trsvcid": "4420", 00:35:42.995 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:42.995 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:42.995 "prchk_reftag": false, 00:35:42.995 "prchk_guard": false, 00:35:42.995 "hdgst": false, 00:35:42.995 "ddgst": false, 00:35:42.995 "allow_unrecognized_csi": false, 00:35:42.995 "method": "bdev_nvme_attach_controller", 00:35:42.995 "req_id": 1 00:35:42.995 } 00:35:42.995 Got JSON-RPC error response 00:35:42.995 response: 00:35:42.995 { 00:35:42.995 "code": -5, 00:35:42.995 "message": "Input/output error" 00:35:42.995 } 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
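The failed attach just above is the point of the check: the kernel target in this test requires DH-HMAC-CHAP, so bdev_nvme_attach_controller called without --dhchap-key is rejected and comes back as JSON-RPC error -5 ("Input/output error"), and the NOT helper inverts the exit status so that failure counts as a pass. A minimal sketch of the same negative check, assuming rpc_cmd forwards to SPDK's scripts/rpc.py as the test helpers here do (the expect-failure wrapper below is illustrative, not lifted from host/auth.sh):

# Expect the unauthenticated attach to fail against an auth-required target.
if scripts/rpc.py bdev_nvme_attach_controller \
     -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "ERROR: attach without a DH-CHAP key unexpectedly succeeded" >&2
    exit 1
fi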
00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.995 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.254 request: 00:35:43.254 { 00:35:43.254 "name": "nvme0", 00:35:43.254 "trtype": "tcp", 00:35:43.254 "traddr": "10.0.0.1", 00:35:43.254 "adrfam": "ipv4", 00:35:43.254 "trsvcid": "4420", 00:35:43.254 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:43.254 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:43.254 "prchk_reftag": false, 00:35:43.254 "prchk_guard": false, 00:35:43.254 "hdgst": false, 00:35:43.254 "ddgst": false, 00:35:43.254 "dhchap_key": "key2", 00:35:43.254 "allow_unrecognized_csi": false, 00:35:43.254 "method": "bdev_nvme_attach_controller", 00:35:43.254 "req_id": 1 00:35:43.254 } 00:35:43.254 Got JSON-RPC error response 00:35:43.254 response: 00:35:43.254 { 00:35:43.254 "code": -5, 00:35:43.254 "message": "Input/output error" 00:35:43.254 } 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
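Throughout this run, nvmet_auth_set_key (host/auth.sh@42-51) is what arms the kernel target for each keyid: the traced echo 'hmac(sha512)', echo ffdhe8192 and echo DHHC-1:... lines are the digest, DH group and secrets being pushed to the nvmet host entry (xtrace does not print redirections, so the destinations are not visible in the log). A rough sketch of the kind of configfs writes involved, assuming the usual Linux nvmet host attributes dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key (the attribute names are an assumption; only the /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 path is confirmed by the cleanup later in this log):

# Hypothetical helper mirroring what nvmet_auth_set_key appears to do:
# program one nvmet host entry with a digest, DH group and key pair.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # assumed attribute name
echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"   # assumed attribute name
echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1 secret for this keyid
[[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # bidirectional secret, if any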
00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.254 request: 00:35:43.254 { 00:35:43.254 "name": "nvme0", 00:35:43.254 "trtype": "tcp", 00:35:43.254 "traddr": "10.0.0.1", 00:35:43.254 "adrfam": "ipv4", 00:35:43.254 "trsvcid": "4420", 00:35:43.254 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:43.254 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:43.254 "prchk_reftag": false, 00:35:43.254 "prchk_guard": false, 00:35:43.254 "hdgst": false, 00:35:43.254 "ddgst": false, 00:35:43.254 "dhchap_key": "key1", 00:35:43.254 "dhchap_ctrlr_key": "ckey2", 00:35:43.254 "allow_unrecognized_csi": false, 00:35:43.254 "method": "bdev_nvme_attach_controller", 00:35:43.254 "req_id": 1 00:35:43.254 } 00:35:43.254 Got JSON-RPC error response 00:35:43.254 response: 00:35:43.254 { 00:35:43.254 "code": -5, 00:35:43.254 "message": "Input/output 
error" 00:35:43.254 } 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.254 20:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 nvme0n1 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 request: 00:35:43.513 { 00:35:43.513 "name": "nvme0", 00:35:43.513 "dhchap_key": "key1", 00:35:43.513 "dhchap_ctrlr_key": "ckey2", 00:35:43.513 "method": "bdev_nvme_set_keys", 00:35:43.513 "req_id": 1 00:35:43.513 } 00:35:43.513 Got JSON-RPC error response 00:35:43.513 response: 00:35:43.513 { 00:35:43.513 "code": -13, 00:35:43.513 "message": "Permission denied" 00:35:43.513 } 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:43.513 20:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:44.896 20:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhNDFhMGUwNzNiNDY2MTY4YjRiODdjZTAzNDU4ZTBjMDRlOGEyNGVmZjYzYWZlU9ZK9g==: 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: ]] 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzgxMDZhZTBhNWE4ZWQ5MmQwYjg0NDJlMzk1NGFiNDkxOWNkZjg4Yjg5MDg4NDQw4deikg==: 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:45.832 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.833 nvme0n1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBhMzY0MTI5MGQ1NmFjZDkyNTA5Y2UwOTYwYzExOTnNdAER: 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: ]] 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTRhYzI4M2M5YjdjYzQyYTY0MjA3NTJmODg2ODE1N2GsJTQd: 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.833 request: 00:35:45.833 { 00:35:45.833 "name": "nvme0", 00:35:45.833 "dhchap_key": "key2", 00:35:45.833 "dhchap_ctrlr_key": "ckey1", 00:35:45.833 "method": "bdev_nvme_set_keys", 00:35:45.833 "req_id": 1 00:35:45.833 } 00:35:45.833 Got JSON-RPC error response 00:35:45.833 response: 00:35:45.833 { 00:35:45.833 "code": -13, 00:35:45.833 "message": "Permission denied" 00:35:45.833 } 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.833 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.093 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:46.093 20:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:47.033 20:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.967 20:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.967 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.967 rmmod nvme_tcp 00:35:48.224 rmmod nvme_fabrics 00:35:48.224 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:48.224 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3136978 ']' 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3136978 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3136978 ']' 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3136978 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3136978 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3136978' 00:35:48.225 killing process with pid 3136978 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3136978 00:35:48.225 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3136978 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@789 -- # iptables-save 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.159 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:35:51.066 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:35:51.325 20:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:52.699 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:52.699 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:52.699 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:35:53.634 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:53.634 20:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.sq4 /tmp/spdk.key-null.kCD /tmp/spdk.key-sha256.VaX /tmp/spdk.key-sha384.hQP /tmp/spdk.key-sha512.ANA /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:53.634 20:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:54.570 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:54.570 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:54.570 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:54.570 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:54.570 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:54.570 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:54.570 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:54.570 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:54.570 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:54.570 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:54.570 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:54.570 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:54.570 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:54.570 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:54.570 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:54.570 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:54.570 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:54.828 00:35:54.828 real 0m56.611s 00:35:54.828 user 0m54.401s 00:35:54.828 sys 0m6.275s 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.828 ************************************ 00:35:54.828 END TEST nvmf_auth_host 00:35:54.828 ************************************ 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.828 ************************************ 00:35:54.828 START TEST nvmf_digest 00:35:54.828 ************************************ 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:54.828 * Looking for test storage... 
00:35:54.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:35:54.828 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.087 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:55.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.088 --rc genhtml_branch_coverage=1 00:35:55.088 --rc genhtml_function_coverage=1 00:35:55.088 --rc genhtml_legend=1 00:35:55.088 --rc geninfo_all_blocks=1 00:35:55.088 --rc geninfo_unexecuted_blocks=1 00:35:55.088 00:35:55.088 ' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:55.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.088 --rc genhtml_branch_coverage=1 00:35:55.088 --rc genhtml_function_coverage=1 00:35:55.088 --rc genhtml_legend=1 00:35:55.088 --rc geninfo_all_blocks=1 00:35:55.088 --rc geninfo_unexecuted_blocks=1 00:35:55.088 00:35:55.088 ' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:55.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.088 --rc genhtml_branch_coverage=1 00:35:55.088 --rc genhtml_function_coverage=1 00:35:55.088 --rc genhtml_legend=1 00:35:55.088 --rc geninfo_all_blocks=1 00:35:55.088 --rc geninfo_unexecuted_blocks=1 00:35:55.088 00:35:55.088 ' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:55.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.088 --rc genhtml_branch_coverage=1 00:35:55.088 --rc genhtml_function_coverage=1 00:35:55.088 --rc genhtml_legend=1 00:35:55.088 --rc geninfo_all_blocks=1 00:35:55.088 --rc geninfo_unexecuted_blocks=1 00:35:55.088 00:35:55.088 ' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.088 
20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:55.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:55.088 20:05:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.088 20:05:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:56.994 
20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:56.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.994 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:56.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:56.995 Found net devices under 0000:0a:00.0: cvl_0_0 
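For reference, the device discovery traced above resolves each supported NIC PCI function to its Linux net device by globbing the kernel's sysfs links, exactly as nvmf/common.sh does in the xtrace. A minimal standalone sketch of that lookup (the address 0000:0a:00.0 is taken from the log above; the loop body is illustrative, not the harness's exact code):

  # List the net devices the kernel has registered for one PCI function
  pci=0000:0a:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue            # no netdev bound (e.g. device held by vfio-pci)
      echo "Found net device under $pci: ${dev##*/}"
  done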
00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:56.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:56.995 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:57.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:57.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:35:57.254 00:35:57.254 --- 10.0.0.2 ping statistics --- 00:35:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.254 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:57.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:57.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:35:57.254 00:35:57.254 --- 10.0.0.1 ping statistics --- 00:35:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.254 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.254 ************************************ 00:35:57.254 START TEST nvmf_digest_clean 00:35:57.254 ************************************ 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:57.254 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3147241 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3147241 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3147241 ']' 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:57.255 20:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:57.255 [2024-10-13 20:05:47.024036] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:35:57.255 [2024-10-13 20:05:47.024182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:57.513 [2024-10-13 20:05:47.162854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.513 [2024-10-13 20:05:47.296211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:57.513 [2024-10-13 20:05:47.296300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:57.513 [2024-10-13 20:05:47.296337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:57.513 [2024-10-13 20:05:47.296362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:57.513 [2024-10-13 20:05:47.296383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
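As a hedged sketch of the startup pattern visible here: the target is launched inside the test namespace with --wait-for-rpc, the harness waits for the RPC socket to come up (its waitforlisten helper), and only then issues configuration RPCs. Paths are abbreviated relative to the SPDK tree, and the polling loop below merely stands in for waitforlisten rather than reproducing it:

  # Start nvmf_tgt inside the test namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5        # keep polling until the app listens on the UNIX socket
  done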
00:35:57.513 [2024-10-13 20:05:47.298053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.451 20:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:58.451 20:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:58.451 20:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:58.451 20:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:58.451 20:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:58.451 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.451 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:58.451 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:58.451 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:58.451 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.451 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:58.751 null0 00:35:58.751 [2024-10-13 20:05:48.419472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:58.751 [2024-10-13 20:05:48.443825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3147401 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3147401 /var/tmp/bperf.sock 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3147401 ']' 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:58.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:58.751 20:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:58.751 [2024-10-13 20:05:48.531434] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:35:58.751 [2024-10-13 20:05:48.531589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147401 ] 00:35:59.031 [2024-10-13 20:05:48.672534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.031 [2024-10-13 20:05:48.814310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.967 20:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:59.967 20:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:59.967 20:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:59.967 20:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:59.967 20:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:00.534 20:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:00.534 20:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.102 nvme0n1 00:36:01.102 20:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:01.102 20:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:01.102 Running I/O for 2 seconds... 
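The digest test drives bdevperf entirely over its RPC socket (/var/tmp/bperf.sock in this run): start it with --wait-for-rpc, initialize the framework, attach the NVMe-oF controller with the data-digest flag, then trigger the workload through bdevperf.py perform_tests. A condensed sketch of the sequence, using the same commands and parameters that appear in the trace above (paths abbreviated):

  # Condensed bperf flow for the first run (randread, 4 KiB, queue depth 128)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests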
00:36:02.974 13443.00 IOPS, 52.51 MiB/s [2024-10-13T18:05:53.048Z] 13411.00 IOPS, 52.39 MiB/s 00:36:03.233 Latency(us) 00:36:03.233 [2024-10-13T18:05:53.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.233 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:03.233 nvme0n1 : 2.04 13180.22 51.49 0.00 0.00 9516.70 4878.79 46797.56 00:36:03.233 [2024-10-13T18:05:53.048Z] =================================================================================================================== 00:36:03.233 [2024-10-13T18:05:53.048Z] Total : 13180.22 51.49 0.00 0.00 9516.70 4878.79 46797.56 00:36:03.233 { 00:36:03.233 "results": [ 00:36:03.233 { 00:36:03.233 "job": "nvme0n1", 00:36:03.233 "core_mask": "0x2", 00:36:03.233 "workload": "randread", 00:36:03.233 "status": "finished", 00:36:03.233 "queue_depth": 128, 00:36:03.233 "io_size": 4096, 00:36:03.233 "runtime": 2.044731, 00:36:03.233 "iops": 13180.21783794543, 00:36:03.233 "mibps": 51.48522592947434, 00:36:03.233 "io_failed": 0, 00:36:03.233 "io_timeout": 0, 00:36:03.233 "avg_latency_us": 9516.701733855563, 00:36:03.233 "min_latency_us": 4878.791111111111, 00:36:03.233 "max_latency_us": 46797.55851851852 00:36:03.233 } 00:36:03.233 ], 00:36:03.233 "core_count": 1 00:36:03.233 } 00:36:03.233 20:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:03.233 20:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:03.233 20:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:03.233 20:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:03.233 | select(.opcode=="crc32c") 00:36:03.233 | "\(.module_name) \(.executed)"' 00:36:03.233 20:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3147401 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3147401 ']' 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3147401 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3147401 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3147401' 00:36:03.491 killing process with pid 3147401 00:36:03.491 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3147401 00:36:03.491 Received shutdown signal, test time was about 2.000000 seconds 00:36:03.491 00:36:03.491 Latency(us) 00:36:03.491 [2024-10-13T18:05:53.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.491 [2024-10-13T18:05:53.307Z] =================================================================================================================== 00:36:03.492 [2024-10-13T18:05:53.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:03.492 20:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3147401 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3148065 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3148065 /var/tmp/bperf.sock 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3148065 ']' 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:04.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:04.426 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.427 20:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:04.427 [2024-10-13 20:05:54.160183] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:36:04.427 [2024-10-13 20:05:54.160331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148065 ] 00:36:04.427 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:04.427 Zero copy mechanism will not be used. 00:36:04.686 [2024-10-13 20:05:54.295636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.686 [2024-10-13 20:05:54.438435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.622 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:05.622 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:05.622 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:05.622 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:05.622 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:06.188 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.188 20:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.755 nvme0n1 00:36:06.755 20:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:06.756 20:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:06.756 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:06.756 Zero copy mechanism will not be used. 00:36:06.756 Running I/O for 2 seconds... 
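After each run the test reads the accel framework's crc32c statistics from the bdevperf process and checks that digests were actually computed, and by the expected module (software here, since scan_dsa=false). The jq filter is the one shown in the trace; the sketch below is a simplified standalone version, with error handling reduced to plain exits:

  # Confirm crc32c work was executed, and by which accel module
  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
          jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  (( acc_executed > 0 )) || exit 1            # digests must have been computed
  [[ $acc_module == software ]] || exit 1     # expected module for this non-DSA run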
00:36:08.644 4832.00 IOPS, 604.00 MiB/s [2024-10-13T18:05:58.459Z] 4792.00 IOPS, 599.00 MiB/s 00:36:08.644 Latency(us) 00:36:08.644 [2024-10-13T18:05:58.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.644 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:08.644 nvme0n1 : 2.00 4792.05 599.01 0.00 0.00 3332.23 1037.65 7961.41 00:36:08.644 [2024-10-13T18:05:58.459Z] =================================================================================================================== 00:36:08.644 [2024-10-13T18:05:58.459Z] Total : 4792.05 599.01 0.00 0.00 3332.23 1037.65 7961.41 00:36:08.644 { 00:36:08.644 "results": [ 00:36:08.644 { 00:36:08.644 "job": "nvme0n1", 00:36:08.644 "core_mask": "0x2", 00:36:08.644 "workload": "randread", 00:36:08.644 "status": "finished", 00:36:08.644 "queue_depth": 16, 00:36:08.644 "io_size": 131072, 00:36:08.644 "runtime": 2.00332, 00:36:08.644 "iops": 4792.045204959767, 00:36:08.644 "mibps": 599.0056506199709, 00:36:08.644 "io_failed": 0, 00:36:08.644 "io_timeout": 0, 00:36:08.644 "avg_latency_us": 3332.23095308642, 00:36:08.644 "min_latency_us": 1037.6533333333334, 00:36:08.644 "max_latency_us": 7961.41037037037 00:36:08.644 } 00:36:08.644 ], 00:36:08.644 "core_count": 1 00:36:08.644 } 00:36:08.644 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:08.644 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:08.644 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:08.644 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:08.644 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:08.644 | select(.opcode=="crc32c") 00:36:08.644 | "\(.module_name) \(.executed)"' 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3148065 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3148065 ']' 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3148065 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:08.902 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148065 00:36:09.161 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:09.162 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:09.162 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148065' 00:36:09.162 killing process with pid 3148065 00:36:09.162 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3148065 00:36:09.162 Received shutdown signal, test time was about 2.000000 seconds 00:36:09.162 00:36:09.162 Latency(us) 00:36:09.162 [2024-10-13T18:05:58.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.162 [2024-10-13T18:05:58.977Z] =================================================================================================================== 00:36:09.162 [2024-10-13T18:05:58.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:09.162 20:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3148065 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3148733 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3148733 /var/tmp/bperf.sock 00:36:10.101 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3148733 ']' 00:36:10.102 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.102 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:10.102 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.102 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:10.102 20:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.102 [2024-10-13 20:05:59.729447] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:36:10.102 [2024-10-13 20:05:59.729603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148733 ] 00:36:10.102 [2024-10-13 20:05:59.865675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.360 [2024-10-13 20:06:00.008712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.927 20:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:10.927 20:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:10.927 20:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:10.927 20:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:10.927 20:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:11.864 20:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.864 20:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:12.122 nvme0n1 00:36:12.122 20:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:12.122 20:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:12.381 Running I/O for 2 seconds... 
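The MiB/s column in the result tables above is simply IOPS scaled by the I/O size: for the first 4 KiB randread run, 13180.22 IOPS x 4096 B / 2^20 is approximately 51.49 MiB/s, matching the reported value. A one-line sanity check:

  # Recompute the reported throughput from IOPS and I/O size
  awk 'BEGIN { printf "%.2f MiB/s\n", 13180.22 * 4096 / 1048576 }'   # -> 51.49 MiB/s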
00:36:14.253 16443.00 IOPS, 64.23 MiB/s [2024-10-13T18:06:04.068Z] 16727.00 IOPS, 65.34 MiB/s 00:36:14.253 Latency(us) 00:36:14.253 [2024-10-13T18:06:04.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.253 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:14.253 nvme0n1 : 2.00 16757.21 65.46 0.00 0.00 7628.68 3470.98 13301.38 00:36:14.253 [2024-10-13T18:06:04.068Z] =================================================================================================================== 00:36:14.253 [2024-10-13T18:06:04.068Z] Total : 16757.21 65.46 0.00 0.00 7628.68 3470.98 13301.38 00:36:14.253 { 00:36:14.253 "results": [ 00:36:14.253 { 00:36:14.253 "job": "nvme0n1", 00:36:14.253 "core_mask": "0x2", 00:36:14.253 "workload": "randwrite", 00:36:14.253 "status": "finished", 00:36:14.253 "queue_depth": 128, 00:36:14.253 "io_size": 4096, 00:36:14.253 "runtime": 2.004033, 00:36:14.253 "iops": 16757.2090878743, 00:36:14.253 "mibps": 65.45784799950899, 00:36:14.253 "io_failed": 0, 00:36:14.253 "io_timeout": 0, 00:36:14.253 "avg_latency_us": 7628.679543229728, 00:36:14.253 "min_latency_us": 3470.9807407407407, 00:36:14.253 "max_latency_us": 13301.38074074074 00:36:14.253 } 00:36:14.253 ], 00:36:14.253 "core_count": 1 00:36:14.253 } 00:36:14.253 20:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:14.253 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:14.253 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:14.253 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:14.253 | select(.opcode=="crc32c") 00:36:14.253 | "\(.module_name) \(.executed)"' 00:36:14.253 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3148733 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3148733 ']' 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3148733 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:14.511 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148733 00:36:14.784 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:14.784 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:14.785 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148733' 00:36:14.785 killing process with pid 3148733 00:36:14.785 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3148733 00:36:14.785 Received shutdown signal, test time was about 2.000000 seconds 00:36:14.785 00:36:14.785 Latency(us) 00:36:14.785 [2024-10-13T18:06:04.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.785 [2024-10-13T18:06:04.600Z] =================================================================================================================== 00:36:14.785 [2024-10-13T18:06:04.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.785 20:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3148733 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3149504 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3149504 /var/tmp/bperf.sock 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3149504 ']' 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:15.730 20:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.730 [2024-10-13 20:06:05.290846] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
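The trace above launches the second clean-digest run (randwrite, 128 KiB I/O, queue depth 16, DSA scan disabled): bdevperf is started paused with --wait-for-rpc on a private RPC socket, and the harness waits for that socket before configuring anything. A minimal sketch of that launch step, reusing the exact flags from the command line above; the polling loop is only an approximation of the waitforlisten helper, not its literal contents:

  # start bdevperf on core 1 (mask 0x2): 2-second randwrite, 128 KiB I/O, queue depth 16;
  # -z keeps the app alive for RPC-driven runs, --wait-for-rpc defers subsystem init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!

  # block until the RPC socket answers (stand-in for waitforlisten)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done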
00:36:15.730 [2024-10-13 20:06:05.291000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149504 ] 00:36:15.730 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:15.730 Zero copy mechanism will not be used. 00:36:15.730 [2024-10-13 20:06:05.424534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.990 [2024-10-13 20:06:05.556115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.557 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:16.557 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:16.557 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:16.557 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:16.557 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.494 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.494 20:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.754 nvme0n1 00:36:17.754 20:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:17.754 20:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:17.754 Zero copy mechanism will not be used. 00:36:17.754 Running I/O for 2 seconds... 
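With the socket up, the run is driven entirely over RPC: initialization is completed, the TCP controller is attached with data digest enabled (--ddgst), and the timed workload is kicked off through bdevperf.py; after the run the script pulls accel_get_stats and keeps only the crc32c entry to check which accel module executed the digest work. The same sequence, condensed from the commands visible above (the rpc shell function is shorthand added here for brevity):

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  rpc framework_start_init
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # runs the configured workload ("Running I/O for 2 seconds..." above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

  # post-run check: which accel module handled crc32c and how many operations it executed
  rpc accel_get_stats | jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"'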
00:36:20.071 4448.00 IOPS, 556.00 MiB/s [2024-10-13T18:06:09.886Z] 4448.00 IOPS, 556.00 MiB/s 00:36:20.071 Latency(us) 00:36:20.071 [2024-10-13T18:06:09.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.071 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:20.071 nvme0n1 : 2.00 4447.02 555.88 0.00 0.00 3587.70 2500.08 8107.05 00:36:20.071 [2024-10-13T18:06:09.886Z] =================================================================================================================== 00:36:20.071 [2024-10-13T18:06:09.886Z] Total : 4447.02 555.88 0.00 0.00 3587.70 2500.08 8107.05 00:36:20.071 { 00:36:20.071 "results": [ 00:36:20.071 { 00:36:20.071 "job": "nvme0n1", 00:36:20.071 "core_mask": "0x2", 00:36:20.071 "workload": "randwrite", 00:36:20.071 "status": "finished", 00:36:20.071 "queue_depth": 16, 00:36:20.071 "io_size": 131072, 00:36:20.071 "runtime": 2.004712, 00:36:20.071 "iops": 4447.022814249628, 00:36:20.071 "mibps": 555.8778517812035, 00:36:20.071 "io_failed": 0, 00:36:20.071 "io_timeout": 0, 00:36:20.071 "avg_latency_us": 3587.7049566897235, 00:36:20.071 "min_latency_us": 2500.077037037037, 00:36:20.071 "max_latency_us": 8107.045925925926 00:36:20.071 } 00:36:20.071 ], 00:36:20.071 "core_count": 1 00:36:20.071 } 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:20.071 | select(.opcode=="crc32c") 00:36:20.071 | "\(.module_name) \(.executed)"' 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3149504 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3149504 ']' 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3149504 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:20.071 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149504 00:36:20.329 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:20.329 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:36:20.329 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149504' 00:36:20.329 killing process with pid 3149504 00:36:20.329 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3149504 00:36:20.329 Received shutdown signal, test time was about 2.000000 seconds 00:36:20.329 00:36:20.329 Latency(us) 00:36:20.329 [2024-10-13T18:06:10.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.329 [2024-10-13T18:06:10.144Z] =================================================================================================================== 00:36:20.329 [2024-10-13T18:06:10.144Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.329 20:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3149504 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3147241 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3147241 ']' 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3147241 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3147241 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3147241' 00:36:21.266 killing process with pid 3147241 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3147241 00:36:21.266 20:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3147241 00:36:22.206 00:36:22.206 real 0m25.086s 00:36:22.206 user 0m49.467s 00:36:22.206 sys 0m4.676s 00:36:22.206 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:22.206 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:22.206 ************************************ 00:36:22.206 END TEST nvmf_digest_clean 00:36:22.206 ************************************ 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.464 ************************************ 00:36:22.464 START TEST nvmf_digest_error 00:36:22.464 ************************************ 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3150838 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3150838 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3150838 ']' 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:22.464 20:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:22.464 [2024-10-13 20:06:12.166915] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:36:22.464 [2024-10-13 20:06:12.167065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.724 [2024-10-13 20:06:12.307683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.724 [2024-10-13 20:06:12.449888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.724 [2024-10-13 20:06:12.449984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.724 [2024-10-13 20:06:12.450010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.724 [2024-10-13 20:06:12.450036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.724 [2024-10-13 20:06:12.450057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
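For the error-path test the NVMe-oF target is restarted inside the job's network namespace, again paused with --wait-for-rpc so the crc32c error module can be wired in before initialization finishes. A sketch of that startup matching the command line above; -i 0 selects the shm/trace instance (hence /dev/shm/nvmf_trace.0) and -e 0xFFFF enables all tracepoint groups, and the polling loop again only approximates waitforlisten:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # wait for the target's default application socket before sending configuration RPCs
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done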
00:36:22.724 [2024-10-13 20:06:12.451760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 [2024-10-13 20:06:13.210642] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.662 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:23.921 null0 00:36:23.921 [2024-10-13 20:06:13.608882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.921 [2024-10-13 20:06:13.633242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3150997 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3150997 /var/tmp/bperf.sock 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3150997 ']' 
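Before initialization completes, the target is told to execute every crc32c operation through the accel "error" module, and the usual digest target (a null bdev exported over NVMe-oF/TCP at 10.0.0.2:4420) is built around it. Only the accel_assign_opc call is traced above; the common_target_config steps behind the null0 bdev and the TCP listener are not shown, so the sketch below reconstructs them with standard SPDK RPCs and should be read as an approximation of that helper rather than its literal contents (the bdev size, block size, and the -a flag are assumptions):

  tgt_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # default socket /var/tmp/spdk.sock

  # route crc32c to the error-injection accel module, then let the target finish init
  tgt_rpc accel_assign_opc -o crc32c -m error
  tgt_rpc framework_start_init

  # approximate common_target_config: a null bdev behind an NVMe-oF/TCP subsystem
  tgt_rpc bdev_null_create null0 100 4096
  tgt_rpc nvmf_create_transport -t tcp
  tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420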
00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:23.921 20:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:23.921 [2024-10-13 20:06:13.730707] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:36:23.921 [2024-10-13 20:06:13.730863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150997 ] 00:36:24.181 [2024-10-13 20:06:13.878026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.440 [2024-10-13 20:06:14.017596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.013 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.013 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:25.013 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:25.013 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:25.308 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:25.308 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.308 20:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:25.308 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.308 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.309 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.589 nvme0n1 00:36:25.589 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:25.589 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.589 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
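The injection sequence traced above is: the host-side bdevperf is set to keep per-error NVMe statistics and retry failed I/O indefinitely (--bdev-retry-count -1), corruption is kept disabled on the target while the --ddgst controller attaches, and only then are 256 corrupted crc32c results queued. Each corrupted digest shows up below as a host-side "data digest error" followed by a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the unlimited retry setting is there to absorb. Condensed from the RPCs visible in the trace (bperf_rpc targets /var/tmp/bperf.sock, tgt_rpc the target's default socket; the function names are shorthand added here):

  bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  tgt_rpc()   { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

  # host: per-error NVMe stats on, unlimited bdev-layer retries
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target: make sure nothing is corrupted while the controller connects
  tgt_rpc accel_error_inject_error -o crc32c -t disable

  # host: attach with data digest enabled so payloads are covered by crc32c
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target: corrupt the next 256 crc32c results, producing the digest errors that follow
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256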
00:36:25.589 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.589 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:25.590 20:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:25.850 Running I/O for 2 seconds... 00:36:25.850 [2024-10-13 20:06:15.541032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.541113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.541148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:25.850 [2024-10-13 20:06:15.559711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.559775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.559806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:25.850 [2024-10-13 20:06:15.577974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.578027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.578058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:25.850 [2024-10-13 20:06:15.598250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.598302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.598332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:25.850 [2024-10-13 20:06:15.616478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.616522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.616563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:25.850 [2024-10-13 20:06:15.638724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.638781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.638824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:25.850 [2024-10-13 20:06:15.658969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:25.850 [2024-10-13 20:06:15.659018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.850 [2024-10-13 20:06:15.659048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.683209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.683261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.683292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.701622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.701664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.701689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.718622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.718664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.718719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.736991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.737040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.737071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.756547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.756587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.756612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.775648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.775723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 
20:06:15.775756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.791289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.791339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.791368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.809385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.809465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.809498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.830354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.830413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.830457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.850369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.850442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.850483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.868002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.868051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.868110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.889482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.889527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.889569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.111 [2024-10-13 20:06:15.905833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.111 [2024-10-13 20:06:15.905883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:121 nsid:1 lba:20368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.111 [2024-10-13 20:06:15.905913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.371 [2024-10-13 20:06:15.926614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.371 [2024-10-13 20:06:15.926674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.371 [2024-10-13 20:06:15.926716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:15.947564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:15.947623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:15.947649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:15.966705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:15.966756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:15.966794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:15.986680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:15.986739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:15.986770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.005611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.005656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.005683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.020153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.020202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.020232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.042090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 
20:06:16.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.042170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.063023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.063073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.063103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.077571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.077612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.077636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.099869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.099920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.099949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.117838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.117888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.117919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.134893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.134950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.134980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.156650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.156708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.156734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.372 [2024-10-13 20:06:16.176974] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.372 [2024-10-13 20:06:16.177022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.372 [2024-10-13 20:06:16.177052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.193689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.193754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.193784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.213098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.213149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.213179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.229764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.229813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.229843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.250715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.250764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.250794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.270295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.270344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.270374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.286001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.286051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.286090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.306253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.306303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.306334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.326280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.326330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.326360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.342600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.342642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.342668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.361232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.361283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.361313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.382723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.382773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.382802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.400520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.400563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.400589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.416002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.416054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.416093] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.632 [2024-10-13 20:06:16.433887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.632 [2024-10-13 20:06:16.433937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.632 [2024-10-13 20:06:16.433967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.453349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.453420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.453473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.474542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.474588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.474629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.493927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.493977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.494006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.510353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.510420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.510469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 13264.00 IOPS, 51.81 MiB/s [2024-10-13T18:06:16.706Z] [2024-10-13 20:06:16.528355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.528428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.528472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.552156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.552216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.552245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.571186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.571247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.571277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.588587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.588633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.588661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.608818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.608877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.608917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.626070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.626119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.626152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.648816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.891 [2024-10-13 20:06:16.648867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.891 [2024-10-13 20:06:16.648897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.891 [2024-10-13 20:06:16.665528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.892 [2024-10-13 20:06:16.665571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.892 [2024-10-13 20:06:16.665606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.892 [2024-10-13 20:06:16.684104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:36:26.892 [2024-10-13 20:06:16.684163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.892 [2024-10-13 20:06:16.684193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:26.892 [2024-10-13 20:06:16.701457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:26.892 [2024-10-13 20:06:16.701497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.892 [2024-10-13 20:06:16.701531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.150 [2024-10-13 20:06:16.717386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.150 [2024-10-13 20:06:16.717456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.150 [2024-10-13 20:06:16.717481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.150 [2024-10-13 20:06:16.736113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.150 [2024-10-13 20:06:16.736172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.150 [2024-10-13 20:06:16.736203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.150 [2024-10-13 20:06:16.760973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.150 [2024-10-13 20:06:16.761024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.150 [2024-10-13 20:06:16.761053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.150 [2024-10-13 20:06:16.781818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.150 [2024-10-13 20:06:16.781880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.150 [2024-10-13 20:06:16.781918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.150 [2024-10-13 20:06:16.798976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.150 [2024-10-13 20:06:16.799027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.150 [2024-10-13 20:06:16.799057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 
20:06:16.819782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.819832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.819861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.843154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.843213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.843243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.864139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.864189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.864227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.880314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.880364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.880413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.898840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.898914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.898944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.917629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.917670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.917717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.935325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.935384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.935466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.151 [2024-10-13 20:06:16.953102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.151 [2024-10-13 20:06:16.953158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.151 [2024-10-13 20:06:16.953189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:16.973948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:16.974006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:16.974036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:16.990071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:16.990121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:16.990155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.009275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.009325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.009362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.026001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.026060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.026090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.043462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.043504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.043529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.063774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.063834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 
20:06:17.063863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.084292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.084342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.084372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.100099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.100156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.100192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.120303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.120352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.409 [2024-10-13 20:06:17.139780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.409 [2024-10-13 20:06:17.139839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.409 [2024-10-13 20:06:17.139868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.410 [2024-10-13 20:06:17.156526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.410 [2024-10-13 20:06:17.156567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.410 [2024-10-13 20:06:17.156597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.410 [2024-10-13 20:06:17.176890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.410 [2024-10-13 20:06:17.176950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.410 [2024-10-13 20:06:17.176980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.410 [2024-10-13 20:06:17.196079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.410 [2024-10-13 20:06:17.196127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:15491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.410 [2024-10-13 20:06:17.196167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.410 [2024-10-13 20:06:17.213128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.410 [2024-10-13 20:06:17.213187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.410 [2024-10-13 20:06:17.213216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.232324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.232375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.232437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.252372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.252450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.252482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.274346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.274411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.274456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.294036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.294093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.294123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.310595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.310635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.310668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.327539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 
20:06:17.327583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.327616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.344967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.345015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.345045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.362615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.362673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.362699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.380151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.380200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.380229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.397813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.397871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.397901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.415300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.415356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.415386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.668 [2024-10-13 20:06:17.434684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.668 [2024-10-13 20:06:17.434758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.668 [2024-10-13 20:06:17.434788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.669 [2024-10-13 20:06:17.452502] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.669 [2024-10-13 20:06:17.452541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.669 [2024-10-13 20:06:17.452565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.669 [2024-10-13 20:06:17.469362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.669 [2024-10-13 20:06:17.469421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.669 [2024-10-13 20:06:17.469454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.927 [2024-10-13 20:06:17.486729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.927 [2024-10-13 20:06:17.486781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.927 [2024-10-13 20:06:17.486811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.927 [2024-10-13 20:06:17.504211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.927 [2024-10-13 20:06:17.504269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.927 [2024-10-13 20:06:17.504298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.927 13461.00 IOPS, 52.58 MiB/s [2024-10-13T18:06:17.742Z] [2024-10-13 20:06:17.520780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.927 [2024-10-13 20:06:17.520873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.927 [2024-10-13 20:06:17.520928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.927 00:36:27.927 Latency(us) 00:36:27.927 [2024-10-13T18:06:17.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.927 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:27.927 nvme0n1 : 2.01 13488.15 52.69 0.00 0.00 9473.75 4708.88 25631.86 00:36:27.927 [2024-10-13T18:06:17.742Z] =================================================================================================================== 00:36:27.927 [2024-10-13T18:06:17.742Z] Total : 13488.15 52.69 0.00 0.00 9473.75 4708.88 25631.86 00:36:27.927 { 00:36:27.927 "results": [ 00:36:27.927 { 00:36:27.927 "job": "nvme0n1", 00:36:27.927 "core_mask": "0x2", 00:36:27.927 "workload": "randread", 00:36:27.927 "status": "finished", 00:36:27.927 "queue_depth": 128, 00:36:27.927 "io_size": 4096, 00:36:27.927 "runtime": 2.00754, 00:36:27.927 "iops": 13488.149675722527, 00:36:27.927 "mibps": 52.68808467079112, 
00:36:27.927 "io_failed": 0, 00:36:27.927 "io_timeout": 0, 00:36:27.927 "avg_latency_us": 9473.745813931222, 00:36:27.927 "min_latency_us": 4708.882962962963, 00:36:27.927 "max_latency_us": 25631.85777777778 00:36:27.927 } 00:36:27.927 ], 00:36:27.927 "core_count": 1 00:36:27.927 } 00:36:27.927 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:27.927 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:27.927 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:27.927 | .driver_specific 00:36:27.927 | .nvme_error 00:36:27.927 | .status_code 00:36:27.927 | .command_transient_transport_error' 00:36:27.927 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 )) 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3150997 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3150997 ']' 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3150997 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3150997 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3150997' 00:36:28.186 killing process with pid 3150997 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3150997 00:36:28.186 Received shutdown signal, test time was about 2.000000 seconds 00:36:28.186 00:36:28.186 Latency(us) 00:36:28.186 [2024-10-13T18:06:18.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.186 [2024-10-13T18:06:18.001Z] =================================================================================================================== 00:36:28.186 [2024-10-13T18:06:18.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.186 20:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3150997 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:29.124 20:06:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3151661 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3151661 /var/tmp/bperf.sock 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3151661 ']' 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:29.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:29.124 20:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:29.124 [2024-10-13 20:06:18.850645] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:36:29.124 [2024-10-13 20:06:18.850787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151661 ] 00:36:29.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:29.124 Zero copy mechanism will not be used. 
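For reference, this is the bdevperf invocation launched just above, restated with each flag glossed. The glosses are annotations based on common bdevperf/SPDK application options and on the job summary printed later in this log; they are not statements taken from the trace itself:

bdevperf_args=(
  -m 2                    # SPDK core mask 0x2 -> a single reactor (the log later reports "Reactor started on core 1")
  -r /var/tmp/bperf.sock  # RPC socket that waitforlisten and the bperf_rpc calls target
  -w randread             # workload pattern
  -o 131072               # I/O size in bytes (hence the zero-copy threshold notice above)
  -t 2                    # run time in seconds
  -q 16                   # queue depth
  -z                      # start idle; I/O begins only when the perform_tests RPC is issued
)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf "${bdevperf_args[@]}"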
00:36:29.382 [2024-10-13 20:06:18.987792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.382 [2024-10-13 20:06:19.128303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.319 20:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:30.319 20:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:30.319 20:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:30.319 20:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:30.578 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:30.578 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.578 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:30.578 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.578 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.578 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.838 nvme0n1 00:36:30.838 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:30.838 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.838 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:30.838 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.838 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:30.838 20:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:30.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:30.838 Zero copy mechanism will not be used. 00:36:30.838 Running I/O for 2 seconds... 
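To make the stream of digest errors that follows easier to read, here is a minimal sketch of the RPC sequence this run appears to follow, reconstructed only from the xtrace lines above and from the earlier get_transient_errcount trace. Socket path, target address, bdev name and the jq filter are copied verbatim from the log; the split between the two RPC sockets is an assumption based on the bperf_rpc vs. rpc_cmd wrappers in the trace, and the exact meaning of "-i 32" is not stated in the log. This is a sketch of what the trace shows, not the harness itself:

BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # rpc_cmd in the trace; default socket assumed

# enable per-controller NVMe error counters in the bdevperf app
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the TCP target with data digest enabled (crc32c injection kept off while connecting)
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt crc32c results (-i 32, as in the trace) so READ completions fail the data digest check
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the workload, then read back the transient-transport-error count that the test asserts is > 0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'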
00:36:30.838 [2024-10-13 20:06:20.638526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:30.838 [2024-10-13 20:06:20.638618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.838 [2024-10-13 20:06:20.638660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:30.838 [2024-10-13 20:06:20.645556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:30.838 [2024-10-13 20:06:20.645601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.838 [2024-10-13 20:06:20.645629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:30.838 [2024-10-13 20:06:20.652374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:30.838 [2024-10-13 20:06:20.652433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.838 [2024-10-13 20:06:20.652478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.658873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.658922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.658952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.665269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.665316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.665346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.671587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.671630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.671656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.678980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.679028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.679057] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.684229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.684275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.684305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.690490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.690534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.690561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.694908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.694963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.695011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.700138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.700185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.700214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.705565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.705606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.705633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.709715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.709757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.709784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.715352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.715403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:31.098 [2024-10-13 20:06:20.715446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.721450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.721494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.721520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.725974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.726016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.726043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.730826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.730869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.730895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.735778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.735820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.735854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.740121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.740162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.740187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.743610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.743668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.743695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.748706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.748750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.748776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.752925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.752967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.752993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.757386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.757434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.098 [2024-10-13 20:06:20.757460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.098 [2024-10-13 20:06:20.761612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.098 [2024-10-13 20:06:20.761654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.761680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.766661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.766705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.766732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.770903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.770946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.770973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.776118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.776174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.776204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.782648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 
00:36:31.099 [2024-10-13 20:06:20.782692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.782718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.789799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.789878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.796755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.796804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.796834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.804797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.804848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.804877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.811758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.811807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.811836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.818268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.818316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.818346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.824616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.824658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.824684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.830963] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.831011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.831039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.837145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.837192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.837220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.843323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.843370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.843405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.849567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.849609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.849636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.855743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.855789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.855818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.861991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.862038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.862067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.868263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.868310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.868338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.874774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.874822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.874850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.881470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.881523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.881551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.888109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.888172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.888203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.894670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.894735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.894765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.901262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.901312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.901342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.099 [2024-10-13 20:06:20.907725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.099 [2024-10-13 20:06:20.907774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.099 [2024-10-13 20:06:20.907802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.914246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.914296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.914325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.920607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.920662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.920688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.927025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.927073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.927102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.933370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.933439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.933467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.939801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.939845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.939870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.945949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.946011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.946038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.951918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.360 [2024-10-13 20:06:20.951962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.360 [2024-10-13 20:06:20.951988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.360 [2024-10-13 20:06:20.957797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.957841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.957865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:20.963790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.963834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.963860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:20.969955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.969998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.970023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:20.976108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.976151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.976177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:20.982264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.982307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.982333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:20.988348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.988415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.988452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:20.994316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:20.994385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:20.994420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.000793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.000836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.000862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.006821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.006864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.006890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.012870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.012915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.012941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.018935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.018979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.019005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.024941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.024985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.025011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.030955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.030998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.031025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.036987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.037031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.037057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.042956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.043000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.043026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.049010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.049054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.049080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.055100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.055145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.055171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.061173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.061216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.061242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.067055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.067099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.067125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.072993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.073035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.073061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.078987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.079031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.079057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.084616] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.084659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.084685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.088441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.088483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.088510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.093816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.093860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.093898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.101316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.101362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.101388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.110054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.110101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.110128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.118052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.118097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.118123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.125118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.125163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.361 [2024-10-13 20:06:21.125191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.361 [2024-10-13 20:06:21.132989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.361 [2024-10-13 20:06:21.133034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.362 [2024-10-13 20:06:21.133061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.362 [2024-10-13 20:06:21.142165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.362 [2024-10-13 20:06:21.142210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.362 [2024-10-13 20:06:21.142237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.362 [2024-10-13 20:06:21.151220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.362 [2024-10-13 20:06:21.151267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.362 [2024-10-13 20:06:21.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.362 [2024-10-13 20:06:21.160469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.362 [2024-10-13 20:06:21.160518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.362 [2024-10-13 20:06:21.160546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.362 [2024-10-13 20:06:21.169599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.362 [2024-10-13 20:06:21.169646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.362 [2024-10-13 20:06:21.169674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.622 [2024-10-13 20:06:21.177819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.622 [2024-10-13 20:06:21.177881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.622 [2024-10-13 20:06:21.177909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.622 [2024-10-13 20:06:21.186859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.622 [2024-10-13 20:06:21.186904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.622 [2024-10-13 20:06:21.186931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.622 [2024-10-13 20:06:21.196126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.622 [2024-10-13 20:06:21.196189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.622 [2024-10-13 20:06:21.196216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.622 [2024-10-13 20:06:21.205418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.622 [2024-10-13 20:06:21.205465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.622 [2024-10-13 20:06:21.205509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.214048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.214095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.214122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.222670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.222717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.222743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.230440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.230487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.230530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.237608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.237654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.237711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.244663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.244707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.244734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.251377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.251428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.251455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.258846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.258894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.266021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.266066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.266093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.273013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.273058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.273086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.279569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.279616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.279643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.285959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.286004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.286032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.292526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.292571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.292599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.298960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.299003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.299029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.305181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.305226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.305252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.310000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.310043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.310069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.318565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.318610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.318638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.325422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.325466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.325493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.331865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.331910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.331937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.339074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.339116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.339141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.344831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.344872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.344897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.351092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.351135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.351179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.357064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.357106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.357131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.364035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.364094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.364121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.369042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.369084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.369109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.375059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.375101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.375126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 
20:06:21.380998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.381040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.381065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.386921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.386964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.386990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.392772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.623 [2024-10-13 20:06:21.392815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.623 [2024-10-13 20:06:21.392841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.623 [2024-10-13 20:06:21.398788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.398833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.398858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.624 [2024-10-13 20:06:21.404749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.404791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.404817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.624 [2024-10-13 20:06:21.410660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.410718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.410745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.624 [2024-10-13 20:06:21.417483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.417528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.417554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.624 [2024-10-13 20:06:21.422294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.422336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.422360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.624 [2024-10-13 20:06:21.428096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.428138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.428163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.624 [2024-10-13 20:06:21.433942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.624 [2024-10-13 20:06:21.433984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.624 [2024-10-13 20:06:21.434008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.884 [2024-10-13 20:06:21.439801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.884 [2024-10-13 20:06:21.439842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.884 [2024-10-13 20:06:21.439867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.884 [2024-10-13 20:06:21.446523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.884 [2024-10-13 20:06:21.446568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.884 [2024-10-13 20:06:21.446594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.884 [2024-10-13 20:06:21.451583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.884 [2024-10-13 20:06:21.451626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.884 [2024-10-13 20:06:21.451666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.884 [2024-10-13 20:06:21.457575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.884 [2024-10-13 20:06:21.457617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.884 [2024-10-13 
20:06:21.457644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.884 [2024-10-13 20:06:21.464167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.884 [2024-10-13 20:06:21.464209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.464235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.472433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.472479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.472507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.480353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.480423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.480451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.489241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.489287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.489314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.496166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.496211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.496237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.502892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.502937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.502978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.510591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.510638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.510665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.517029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.517073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.517117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.525746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.525791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.525818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.533557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.533601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.533628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.542388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.542442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.542470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.550008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.550051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.550077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.558123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.558169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.558196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.564616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 
[2024-10-13 20:06:21.564663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.564714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.570891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.570936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.570962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.577530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.577576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.577616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.584472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.584516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.584553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.590486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.590529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.590564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.596596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.596641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.596668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.602465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.602509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.602535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.609121] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.609164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.609189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.614335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.614377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.614413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.620122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.620165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.620190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.625949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.625991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.626017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 4774.00 IOPS, 596.75 MiB/s [2024-10-13T18:06:21.700Z] [2024-10-13 20:06:21.634389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.634441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.634467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.640271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.640316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.640342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.645341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.645408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.645437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.651350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.885 [2024-10-13 20:06:21.651416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.885 [2024-10-13 20:06:21.651444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.885 [2024-10-13 20:06:21.657424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.657479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.657505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.663488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.663533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.663559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.669539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.669583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.669610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.675510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.675561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.675593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.681498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.681541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.681579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.687510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.687563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.687589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.693380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.693443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.886 [2024-10-13 20:06:21.693471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.886 [2024-10-13 20:06:21.699174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:31.886 [2024-10-13 20:06:21.699216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.699242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.705047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.705091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.705118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.710953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.710993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.711018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.716999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.717041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.717067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.722972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.723014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.723040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.728840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.728882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.728907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.736516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.736561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.736588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.741593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.741637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.741662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.747432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.747476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.747503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.753204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.753246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.753271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.760184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.760227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.760269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.766403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.766447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.766474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.775983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.776032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.776059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.781539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.781583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.787930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.787973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.788013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.793794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.793852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.793878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.797646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.797688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.797713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.804238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.804281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.804306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.812955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.813016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.813055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 
20:06:21.820589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.147 [2024-10-13 20:06:21.820634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.147 [2024-10-13 20:06:21.820678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.147 [2024-10-13 20:06:21.828826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.828887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.828912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.835874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.835918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.835945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.842357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.842468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.842502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.849009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.849056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.849098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.855021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.855064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.855090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.861030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.861074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.861116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.867119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.867159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.867183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.873217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.873258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.873283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.879327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.879386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.879422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.885998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.886040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.886064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.892078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.892123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.892166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.897963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.898005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.898044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.903905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.903947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.903974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.909871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.909912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.909936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.915764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.915807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.915832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.921716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.921758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.921783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.927775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.927816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.927841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.933690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.933747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.933773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.939829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.939870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.939894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.945816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.945860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.945885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.951779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.951822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.951846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.148 [2024-10-13 20:06:21.957875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.148 [2024-10-13 20:06:21.957918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.148 [2024-10-13 20:06:21.957945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.963525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.963569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.963593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.969641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.969697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.969725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.974566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.974614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.974641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.978527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.978569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.978595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.983349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.983391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.983426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.987022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.987063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.987088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.991939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.991981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.992020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:21.998238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:21.998281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:21.998304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.004628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.004686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.004727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.012223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.012265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.012289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.019401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.019457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.019483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.026048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.026091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.026116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.033377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.033446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.033474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.040706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.040764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.040789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.048410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.048468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.048495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.055626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.055681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.055708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.062600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.062655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.062681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.070060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.070101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.070126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.078435] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.078488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.078513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.085924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.085967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.085992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.092533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.092577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.092603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.099570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.099613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.099639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.106524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.106568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.106596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.113153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.113196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.113232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.119051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.119093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.119118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.125122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.125182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.125207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.131087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.131131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.131173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.137082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.137124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.137160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.143034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.143075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.143100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.148883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.148922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.148947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.154733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.154775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.154800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.161066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.161108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.161133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.166962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.167029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.410 [2024-10-13 20:06:22.167055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.410 [2024-10-13 20:06:22.173120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.410 [2024-10-13 20:06:22.173164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.173189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.179118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.179161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.179185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.185478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.185522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.185548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.192553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.192597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.192622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.201146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.201206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.201252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.208458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.208502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.208529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.216107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.216151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.216178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.411 [2024-10-13 20:06:22.223202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.411 [2024-10-13 20:06:22.223245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.411 [2024-10-13 20:06:22.223281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.230738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.230782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.230808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.238007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.238050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.238075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.244658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.244716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.244756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.251957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.252003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.252029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.258956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.259001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.259027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.265927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.265969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.265996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.271789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.271832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.271858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.277674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.277717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.277758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.283761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.283816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.283843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.289790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.289847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.289878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.296658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.296701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.296742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.302977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.303027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.303057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.309176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.309223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.309252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.315735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.315782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.315811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.322690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.322753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.322782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.329134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.329181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.329210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.335563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.335606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.335630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.341923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.341971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.342021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.348076] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.348122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.348152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.354511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.354551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.354575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.361380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.361452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.361479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.368122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.368169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.368198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.376768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.376817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.376846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.385340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.385389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.385446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.394389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.394462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.394489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.403328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.403387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.671 [2024-10-13 20:06:22.403443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.671 [2024-10-13 20:06:22.411305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.671 [2024-10-13 20:06:22.411353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.411383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.420141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.420192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.420222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.428979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.429029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.429059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.437819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.437868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.437899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.446914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.446964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.446994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.455679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.455743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.455774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.464458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.464518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.464544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.473005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.473054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.473084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.672 [2024-10-13 20:06:22.481747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.672 [2024-10-13 20:06:22.481796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.672 [2024-10-13 20:06:22.481826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.490547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.490594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.490620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.499230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.499278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.499309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.508058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.508106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.508136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.516855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.516905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.516934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.524073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.524121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.524150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.530873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.530923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.530954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.537918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.537966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.537995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.544745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.544803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.544833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.550948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.550996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.551024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.555959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.556008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.556037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.564045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.564095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.564125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.572774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.572823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.572852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.580232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.580280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.580309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.586784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.586833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.586863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.593905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.593955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.593985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.598605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.598646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.598672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.605765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.605844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.613088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.613138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.931 [2024-10-13 20:06:22.613168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:32.931 [2024-10-13 20:06:22.620740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.931 [2024-10-13 20:06:22.620789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.932 [2024-10-13 20:06:22.620819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:32.932 [2024-10-13 20:06:22.627961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:32.932 [2024-10-13 20:06:22.628009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.932 [2024-10-13 20:06:22.628038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:32.932 4697.00 IOPS, 587.12 MiB/s 00:36:32.932 Latency(us) 00:36:32.932 [2024-10-13T18:06:22.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.932 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:32.932 nvme0n1 : 2.00 4696.71 587.09 0.00 0.00 3400.37 1055.86 10340.12 00:36:32.932 [2024-10-13T18:06:22.747Z] =================================================================================================================== 00:36:32.932 [2024-10-13T18:06:22.747Z] Total : 4696.71 587.09 0.00 0.00 3400.37 1055.86 10340.12 00:36:32.932 { 00:36:32.932 "results": [ 00:36:32.932 { 00:36:32.932 "job": "nvme0n1", 00:36:32.932 "core_mask": "0x2", 00:36:32.932 "workload": "randread", 00:36:32.932 "status": "finished", 00:36:32.932 "queue_depth": 16, 00:36:32.932 "io_size": 131072, 00:36:32.932 "runtime": 2.003528, 00:36:32.932 "iops": 4696.714994749263, 00:36:32.932 "mibps": 587.0893743436578, 00:36:32.932 "io_failed": 0, 00:36:32.932 "io_timeout": 0, 00:36:32.932 "avg_latency_us": 3400.367100090526, 00:36:32.932 "min_latency_us": 1055.8577777777778, 00:36:32.932 "max_latency_us": 10340.124444444444 00:36:32.932 } 00:36:32.932 ], 00:36:32.932 "core_count": 1 00:36:32.932 } 00:36:32.932 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:32.932 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:32.932 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:32.932 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:32.932 | .driver_specific 00:36:32.932 | .nvme_error 00:36:32.932 | .status_code 00:36:32.932 | .command_transient_transport_error' 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 303 > 0 )) 00:36:33.191 20:06:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3151661 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3151661 ']' 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3151661 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151661 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151661' 00:36:33.191 killing process with pid 3151661 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3151661 00:36:33.191 Received shutdown signal, test time was about 2.000000 seconds 00:36:33.191 00:36:33.191 Latency(us) 00:36:33.191 [2024-10-13T18:06:23.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.191 [2024-10-13T18:06:23.006Z] =================================================================================================================== 00:36:33.191 [2024-10-13T18:06:23.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.191 20:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3151661 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3152206 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3152206 /var/tmp/bperf.sock 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3152206 ']' 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:36:34.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:34.127 20:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.385 [2024-10-13 20:06:23.944254] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:36:34.386 [2024-10-13 20:06:23.944439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152206 ] 00:36:34.386 [2024-10-13 20:06:24.077453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.644 [2024-10-13 20:06:24.212817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.212 20:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:35.212 20:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:35.212 20:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:35.212 20:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:35.471 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:35.471 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.471 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.471 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.471 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:35.471 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.037 nvme0n1 00:36:36.037 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:36.037 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.037 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:36.037 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.037 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:36.037 20:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
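[editor's note, not part of the console output] A condensed sketch of the randwrite/4k/qd128 digest-error pass that the xtrace lines above are driving. Every command, socket path, target address and the "-i 256" inject count is copied from this run's trace (host/digest.sh); only the paths are shortened to be relative to the SPDK tree, and the backgrounding of bdevperf is implied by the waitforlisten step.

# 1. Start bdevperf with its own RPC socket (-z: wait for RPC start) and let the
#    test wait for /var/tmp/bperf.sock to come up.
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# 2. Enable per-controller NVMe error counters and unlimited bdev retries on the
#    bperf side, so transient transport errors are counted instead of failing I/O.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Start with crc32c error injection disabled (rpc_cmd in the trace uses the
#    default RPC socket, i.e. the target app), attach the controller with data
#    digest enabled (--ddgst), then arm the injection to corrupt 256 operations.
scripts/rpc.py accel_error_inject_error -o crc32c -t disable
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# 4. Run the workload, then read back how many completions ended as
#    COMMAND TRANSIENT TRANSPORT ERROR; the test asserts this count is > 0
#    (303 in the preceding randread pass above).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'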
00:36:36.037 Running I/O for 2 seconds... 00:36:36.037 [2024-10-13 20:06:25.797326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:36:36.037 [2024-10-13 20:06:25.798886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.037 [2024-10-13 20:06:25.798951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:36.037 [2024-10-13 20:06:25.814326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:36:36.037 [2024-10-13 20:06:25.815361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.037 [2024-10-13 20:06:25.815422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:36.037 [2024-10-13 20:06:25.834488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:36:36.037 [2024-10-13 20:06:25.837007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.037 [2024-10-13 20:06:25.837064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:36.037 [2024-10-13 20:06:25.845762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:36:36.037 [2024-10-13 20:06:25.846833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.037 [2024-10-13 20:06:25.846877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.862026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:36:36.296 [2024-10-13 20:06:25.863294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.863339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.882062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:36:36.296 [2024-10-13 20:06:25.883984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.884030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.898749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:36:36.296 [2024-10-13 20:06:25.901052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.901096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.910575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:36:36.296 [2024-10-13 20:06:25.911661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.911726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.930081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:36:36.296 [2024-10-13 20:06:25.931807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.931852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.945732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:36:36.296 [2024-10-13 20:06:25.947741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.947782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.962087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:36:36.296 [2024-10-13 20:06:25.963563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.963603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.978346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:36:36.296 [2024-10-13 20:06:25.980261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.980305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:25.993525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8e88 00:36:36.296 [2024-10-13 20:06:25.995420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:25.995476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:26.010008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:36:36.296 [2024-10-13 20:06:26.011647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 
20:06:26.011701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:26.030240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:36:36.296 [2024-10-13 20:06:26.032725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.296 [2024-10-13 20:06:26.032798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:36.296 [2024-10-13 20:06:26.042268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:36:36.296 [2024-10-13 20:06:26.043483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.297 [2024-10-13 20:06:26.043523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:36.297 [2024-10-13 20:06:26.063200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:36:36.297 [2024-10-13 20:06:26.065786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.297 [2024-10-13 20:06:26.065831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:36.297 [2024-10-13 20:06:26.075238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:36:36.297 [2024-10-13 20:06:26.076450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.297 [2024-10-13 20:06:26.076490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:36.297 [2024-10-13 20:06:26.092339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:36:36.297 [2024-10-13 20:06:26.093532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.297 [2024-10-13 20:06:26.093574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:36.297 [2024-10-13 20:06:26.110907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:36:36.555 [2024-10-13 20:06:26.113198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.113243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.125817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:36:36.555 [2024-10-13 20:06:26.127416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23138 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.127484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.140297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:36:36.555 [2024-10-13 20:06:26.142851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.142896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.156415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:36:36.555 [2024-10-13 20:06:26.158379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.158449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.172248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:36:36.555 [2024-10-13 20:06:26.173668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.173733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.187323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:36:36.555 [2024-10-13 20:06:26.188828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.188881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.204050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:36:36.555 [2024-10-13 20:06:26.204956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.205000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.222279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:36:36.555 [2024-10-13 20:06:26.224267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.224321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:36.555 [2024-10-13 20:06:26.237016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:36:36.555 [2024-10-13 20:06:26.238426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.555 [2024-10-13 20:06:26.238481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.254564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:36:36.556 [2024-10-13 20:06:26.256866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.256911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.270612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:36:36.556 [2024-10-13 20:06:26.272814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.272870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.283775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:36:36.556 [2024-10-13 20:06:26.284839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.284884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.300174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:36:36.556 [2024-10-13 20:06:26.301509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.301548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.315922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:36:36.556 [2024-10-13 20:06:26.317589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.317629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.332618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:36:36.556 [2024-10-13 20:06:26.334440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.334479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.348709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:36:36.556 [2024-10-13 
20:06:26.350578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.350618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.556 [2024-10-13 20:06:26.365534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:36:36.556 [2024-10-13 20:06:26.367771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.556 [2024-10-13 20:06:26.367825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.377484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8e88 00:36:36.814 [2024-10-13 20:06:26.378414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.393485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:36:36.814 [2024-10-13 20:06:26.394459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.394500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.412821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:36:36.814 [2024-10-13 20:06:26.414829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.414874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.428637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:36:36.814 [2024-10-13 20:06:26.430660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.430715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.443371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:36:36.814 [2024-10-13 20:06:26.444803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.444847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.459448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:36:36.814 [2024-10-13 20:06:26.460747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.460792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.477558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:36:36.814 [2024-10-13 20:06:26.479977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.480030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.488548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:36:36.814 [2024-10-13 20:06:26.489556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.814 [2024-10-13 20:06:26.489596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:36.814 [2024-10-13 20:06:26.503501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:36:36.815 [2024-10-13 20:06:26.504473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.504538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.520974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:36:36.815 [2024-10-13 20:06:26.522185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.522230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.537055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:36:36.815 [2024-10-13 20:06:26.538170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.538222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.551844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:36:36.815 [2024-10-13 20:06:26.552873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.552917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.568308] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:36:36.815 [2024-10-13 20:06:26.569752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.569804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.585866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:36:36.815 [2024-10-13 20:06:26.587510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.587550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.601617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:36.815 [2024-10-13 20:06:26.603269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.603313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:36.815 [2024-10-13 20:06:26.618491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:36:36.815 [2024-10-13 20:06:26.620569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:36.815 [2024-10-13 20:06:26.620609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.634081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:36:37.075 [2024-10-13 20:06:26.636154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.636200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.651300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:37.075 [2024-10-13 20:06:26.653064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.653108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.667921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:36:37.075 [2024-10-13 20:06:26.669726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.669771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.687575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:36:37.075 [2024-10-13 20:06:26.690241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.690285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.699605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:36:37.075 [2024-10-13 20:06:26.701009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.701063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.720281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:36:37.075 [2024-10-13 20:06:26.722466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.722507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.736851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:36:37.075 [2024-10-13 20:06:26.739104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.739157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.751929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:36:37.075 [2024-10-13 20:06:26.753576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.753616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:37.075 [2024-10-13 20:06:26.768287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:36:37.075 [2024-10-13 20:06:26.769758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.075 [2024-10-13 20:06:26.769801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.075 15715.00 IOPS, 61.39 MiB/s [2024-10-13T18:06:26.890Z] [2024-10-13 20:06:26.784904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:36:37.076 [2024-10-13 20:06:26.786722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.786767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:37.076 [2024-10-13 20:06:26.802686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:36:37.076 [2024-10-13 20:06:26.805252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.805297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:37.076 [2024-10-13 20:06:26.813993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:36:37.076 [2024-10-13 20:06:26.815124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.815168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:37.076 [2024-10-13 20:06:26.830352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:36:37.076 [2024-10-13 20:06:26.831367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.831437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:37.076 [2024-10-13 20:06:26.845604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:36:37.076 [2024-10-13 20:06:26.846907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.846952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:37.076 [2024-10-13 20:06:26.861527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:36:37.076 [2024-10-13 20:06:26.862802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.862846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:37.076 [2024-10-13 20:06:26.881446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:36:37.076 [2024-10-13 20:06:26.883856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.076 [2024-10-13 20:06:26.883901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:37.336 [2024-10-13 20:06:26.893615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:36:37.336 [2024-10-13 20:06:26.894785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23483 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:37.336 [2024-10-13 20:06:26.894831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:37.336 [2024-10-13 20:06:26.909609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:36:37.336 [2024-10-13 20:06:26.910748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.336 [2024-10-13 20:06:26.910793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:37.336 [2024-10-13 20:06:26.929050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:36:37.336 [2024-10-13 20:06:26.931049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.336 [2024-10-13 20:06:26.931095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.336 [2024-10-13 20:06:26.940826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:36:37.336 [2024-10-13 20:06:26.941957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.336 [2024-10-13 20:06:26.942001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:37.336 [2024-10-13 20:06:26.956813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:36:37.337 [2024-10-13 20:06:26.957966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:26.958010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:26.975805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:36:37.337 [2024-10-13 20:06:26.977589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:26.977630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:26.990504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:36:37.337 [2024-10-13 20:06:26.993083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:26.993128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.006459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:36:37.337 [2024-10-13 20:06:27.008537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:3408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.008578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.021986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:36:37.337 [2024-10-13 20:06:27.023574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.023614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.036886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:36:37.337 [2024-10-13 20:06:27.038446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.038487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.054388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:36:37.337 [2024-10-13 20:06:27.056234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.056294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.070207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:36:37.337 [2024-10-13 20:06:27.072001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.072045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.086993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:36:37.337 [2024-10-13 20:06:27.089156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.089202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.101765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:36:37.337 [2024-10-13 20:06:27.103348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.103401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.117752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:36:37.337 [2024-10-13 20:06:27.119239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.119284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.135885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:36:37.337 [2024-10-13 20:06:27.138561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.138601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.337 [2024-10-13 20:06:27.147021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:36:37.337 [2024-10-13 20:06:27.148202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.337 [2024-10-13 20:06:27.148246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.162829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:36:37.597 [2024-10-13 20:06:27.164176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.164221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.182269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:36:37.597 [2024-10-13 20:06:27.184251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.184295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.197956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:36:37.597 [2024-10-13 20:06:27.200140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.200185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.214861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:36:37.597 [2024-10-13 20:06:27.217259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.217304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.229605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) 
with pdu=0x200016bf8a50 00:36:37.597 [2024-10-13 20:06:27.231430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.231499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.244074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:36:37.597 [2024-10-13 20:06:27.246773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.246817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.258684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:36:37.597 [2024-10-13 20:06:27.259847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.259890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.273417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:36:37.597 [2024-10-13 20:06:27.274617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.274658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.291159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:36:37.597 [2024-10-13 20:06:27.292585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.292625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.307551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:36:37.597 [2024-10-13 20:06:27.309126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.309170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.323862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:36:37.597 [2024-10-13 20:06:27.325386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.597 [2024-10-13 20:06:27.325463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.597 [2024-10-13 20:06:27.342107] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:36:37.597 [2024-10-13 20:06:27.344819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.598 [2024-10-13 20:06:27.344864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.598 [2024-10-13 20:06:27.353922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:36:37.598 [2024-10-13 20:06:27.355276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.598 [2024-10-13 20:06:27.355321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.598 [2024-10-13 20:06:27.371547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:36:37.598 [2024-10-13 20:06:27.373167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.598 [2024-10-13 20:06:27.373213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:37.598 [2024-10-13 20:06:27.389303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:36:37.598 [2024-10-13 20:06:27.391757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.598 [2024-10-13 20:06:27.391798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:37.598 [2024-10-13 20:06:27.401072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:36:37.598 [2024-10-13 20:06:27.402237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.598 [2024-10-13 20:06:27.402281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.420701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:36:37.858 [2024-10-13 20:06:27.422634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.422675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.435401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8e88 00:36:37.858 [2024-10-13 20:06:27.438067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.438111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.450010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:36:37.858 [2024-10-13 20:06:27.451204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.451249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.466679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:36:37.858 [2024-10-13 20:06:27.468260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.468305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.486306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:36:37.858 [2024-10-13 20:06:27.488897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.497389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:36:37.858 [2024-10-13 20:06:27.498587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.498634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.512476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:36:37.858 [2024-10-13 20:06:27.513668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.513721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.530102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9f68 00:36:37.858 [2024-10-13 20:06:27.531510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.531567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:37.858 [2024-10-13 20:06:27.546341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:36:37.858 [2024-10-13 20:06:27.547917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.858 [2024-10-13 20:06:27.547961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.562625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:36:37.859 [2024-10-13 20:06:27.564131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.564176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.578742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:36:37.859 [2024-10-13 20:06:27.580575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.580615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.596642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:36:37.859 [2024-10-13 20:06:27.599267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.599311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.607872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:36:37.859 [2024-10-13 20:06:27.609021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.609065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.622913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:36:37.859 [2024-10-13 20:06:27.624034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.624079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.640431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8e88 00:36:37.859 [2024-10-13 20:06:27.641786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.641831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.655089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:36:37.859 [2024-10-13 20:06:27.656444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.656484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:37.859 [2024-10-13 20:06:27.670979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:36:37.859 [2024-10-13 20:06:27.672320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:37.859 [2024-10-13 20:06:27.672365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:38.118 [2024-10-13 20:06:27.687160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:36:38.118 [2024-10-13 20:06:27.688568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.118 [2024-10-13 20:06:27.688609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:38.118 [2024-10-13 20:06:27.705189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:36:38.118 [2024-10-13 20:06:27.707368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.118 [2024-10-13 20:06:27.707438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:38.118 [2024-10-13 20:06:27.719853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:36:38.118 [2024-10-13 20:06:27.721410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.118 [2024-10-13 20:06:27.721471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:38.118 [2024-10-13 20:06:27.734782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:36:38.118 [2024-10-13 20:06:27.736707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.118 [2024-10-13 20:06:27.736762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:38.118 [2024-10-13 20:06:27.751122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:36:38.118 [2024-10-13 20:06:27.752554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.118 [2024-10-13 20:06:27.752594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:38.118 [2024-10-13 20:06:27.766373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:36:38.118 [2024-10-13 20:06:27.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:38.118 [2024-10-13 20:06:27.767776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:38.118 [2024-10-13 20:06:27.784110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920
00:36:38.118 15929.00 IOPS, 62.22 MiB/s [2024-10-13T18:06:27.933Z] [2024-10-13 20:06:27.785844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:38.118 [2024-10-13 20:06:27.785888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:38.118
00:36:38.118 Latency(us)
00:36:38.118 [2024-10-13T18:06:27.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:38.118 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:38.118 nvme0n1 : 2.01 15921.00 62.19 0.00 0.00 8023.22 3568.07 21262.79
00:36:38.118 [2024-10-13T18:06:27.933Z] ===================================================================================================================
00:36:38.118 [2024-10-13T18:06:27.933Z] Total : 15921.00 62.19 0.00 0.00 8023.22 3568.07 21262.79
00:36:38.118 {
00:36:38.118 "results": [
00:36:38.118 {
00:36:38.118 "job": "nvme0n1",
00:36:38.118 "core_mask": "0x2",
00:36:38.118 "workload": "randwrite",
00:36:38.118 "status": "finished",
00:36:38.118 "queue_depth": 128,
00:36:38.118 "io_size": 4096,
00:36:38.118 "runtime": 2.005087,
00:36:38.118 "iops": 15921.004923975868,
00:36:38.118 "mibps": 62.191425484280735,
00:36:38.118 "io_failed": 0,
00:36:38.118 "io_timeout": 0,
00:36:38.118 "avg_latency_us": 8023.224770530014,
00:36:38.118 "min_latency_us": 3568.071111111111,
00:36:38.118 "max_latency_us": 21262.79111111111
00:36:38.118 }
00:36:38.118 ],
00:36:38.118 "core_count": 1
00:36:38.118 }
00:36:38.118 20:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:38.118 20:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:38.118 20:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:38.118 20:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:38.118 | .driver_specific
00:36:38.118 | .nvme_error
00:36:38.118 | .status_code
00:36:38.118 | .command_transient_transport_error'
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 ))
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3152206
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3152206 ']'
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3152206
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152206
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152206'
00:36:38.383 killing process with pid 3152206
00:36:38.383 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3152206
00:36:38.383 Received shutdown signal, test time was about 2.000000 seconds
00:36:38.383
00:36:38.383 Latency(us)
00:36:38.383 [2024-10-13T18:06:28.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:38.383 [2024-10-13T18:06:28.198Z] ===================================================================================================================
00:36:38.383 [2024-10-13T18:06:28.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:38.384 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3152206
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3152846
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3152846 /var/tmp/bperf.sock
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3152846 ']'
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:39.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:39.322 20:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:39.322 [2024-10-13 20:06:29.078293] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:36:39.322 [2024-10-13 20:06:29.078451] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152846 ]
00:36:39.322 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:39.322 Zero copy mechanism will not be used.
00:36:39.582 [2024-10-13 20:06:29.211877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:39.582 [2024-10-13 20:06:29.347585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:40.520 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:40.520 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:36:40.520 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:40.520 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:40.778 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:40.778 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:40.778 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:40.778 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:40.778 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:40.778 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:41.036 nvme0n1
00:36:41.036 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:41.036 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.036 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.036 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.036 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:41.036 20:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:41.296 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:41.296 Zero copy mechanism will not be used.
00:36:41.296 Running I/O for 2 seconds...
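[editor's note] For readability, the bperf setup and error-count check traced above boil down to the following shell sketch. It only restates commands already visible in the trace: rpc.py against the bdevperf control socket /var/tmp/bperf.sock, the accel_error_inject_error calls issued through the script's rpc_cmd helper, and the bdev_get_iostat/jq pipeline behind get_transient_errcount. The target-side RPC socket is an assumption (the trace never expands rpc_cmd), and the meaning of the -i 32 argument is taken verbatim from the trace rather than interpreted.

  # Minimal sketch of the digest-error flow above (not captured log output).
  BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumption: nvmf target uses its default RPC socket

  # Keep per-command NVMe error statistics and retry failed I/O inside bdevperf.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the controller with TCP data digest enabled while crc32c error injection is off...
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then switch injection to 'corrupt' (arguments as in the trace) and run the workload.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Count the transient transport errors recorded for the bdev (the value behind the "(( 125 > 0 ))" check above).
  $BPERF_RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'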
00:36:41.296 [2024-10-13 20:06:30.869037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.869539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.869588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.876194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.876724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.876789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.883347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.883828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.883874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.890733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.891171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.891224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.898209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.898607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.898653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.905743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.906206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.906250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.913176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.913582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.913622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.920509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.920978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.921035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.927806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.928194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.928246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.936019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.936469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.936523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.944179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.944612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.944666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.951671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.952118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.952162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.959367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.959839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.959893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.966839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.967290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 
20:06:30.967344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.974904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.975338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.975420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.982600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.983027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.983072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.989994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.990439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.990478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:30.997163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.296 [2024-10-13 20:06:30.997585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.296 [2024-10-13 20:06:30.997625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.296 [2024-10-13 20:06:31.004246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.004678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.004717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.011279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.011914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.011959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.019095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.019531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.019571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.026892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.027076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.027120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.034148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.034531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.034582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.041292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.041660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.041736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.048019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.048368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.048422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.054600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.055000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.055051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.061365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.061729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.061774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.068098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.068508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.068550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.074836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.075187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.075232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.081453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.081912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.081956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.088289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.088632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.088672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.094966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.095318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.095362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.101759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.102142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.102191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.297 [2024-10-13 20:06:31.108475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.297 [2024-10-13 20:06:31.108814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.297 [2024-10-13 20:06:31.108859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.114693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.115035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.115078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.120916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.121242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.121281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.127261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.127587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.127630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.133679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.134042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.134084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.140040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.140416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.140458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.146507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.146857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.146898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.152899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.153226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.153294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 
20:06:31.159341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.159692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.159733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.165727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.166052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.166092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.172085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.172442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.172484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.178511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.178828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.178869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.184646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.184956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.191004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.191334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.191374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.197447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.197774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.197814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.204275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.204661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.204718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.211852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.212186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.212226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.219425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.219795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.219836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.227221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.227543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.227584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.234908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.235237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.235277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.242541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.242873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.242913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.250247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.250721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.250761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.258266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.258590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.258631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.265936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.266261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.266302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.273667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.274067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.559 [2024-10-13 20:06:31.274118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.559 [2024-10-13 20:06:31.281094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.559 [2024-10-13 20:06:31.281420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.281462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.287788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.288167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.288212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.294649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.295031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.295076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.301286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.301633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.301674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.307695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.308017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.308056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.313856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.314163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.314203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.320233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.320570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.320610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.326585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.326895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.326951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.332984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.333313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.333354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.339276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.339600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.339641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.345326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.345653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.345707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.351779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.352088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.352129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.358061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.358375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.358424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.364379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.364708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.364749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.560 [2024-10-13 20:06:31.371466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.560 [2024-10-13 20:06:31.371642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.560 [2024-10-13 20:06:31.371681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.821 [2024-10-13 20:06:31.378348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.821 [2024-10-13 20:06:31.378670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.821 [2024-10-13 20:06:31.378713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.821 [2024-10-13 20:06:31.386144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.821 [2024-10-13 20:06:31.386491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.821 [2024-10-13 20:06:31.386543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.821 [2024-10-13 20:06:31.393511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:36:41.821 [2024-10-13 20:06:31.393857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.821 [2024-10-13 20:06:31.393898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.821 [2024-10-13 20:06:31.401172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.401553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.401595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.408980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.409425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.409466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.416889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.417270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.417325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.424604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.424933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.424973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.432094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.432464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.432504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.439716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.440073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.446535] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.446839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.446881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.452613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.452974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.453014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.458886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.459246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.459285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.465314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.465629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.465671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.471629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.471953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.471994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.477868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.478175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.478231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.484231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.484564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.484605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.490554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.490870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.490910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.496843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.497167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.497207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.503225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.503556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.503597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.509526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.509832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.509873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.515474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.515817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.515857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.521801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.522107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.522148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.528083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.528415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.528455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.534388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.534721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.534761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.540666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.540974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.541014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.546662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.546991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.547031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.553121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.553453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.553509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.559659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.559989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.560030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.566153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.566485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.566526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.572561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.572871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.572912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.578921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.822 [2024-10-13 20:06:31.579229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.822 [2024-10-13 20:06:31.579284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.822 [2024-10-13 20:06:31.585261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.585592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.585632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.591741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.592050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.592090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.597837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.598176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.598216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.603472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.603816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.603856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.610755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.611054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.611095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.616634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.616878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.616920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.622211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.622449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.622488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.627635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.627862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.627903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.823 [2024-10-13 20:06:31.633464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:41.823 [2024-10-13 20:06:31.633727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.823 [2024-10-13 20:06:31.633768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.639925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.640155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.640196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.645890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.646122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.651448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.651675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.651715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.656844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 
[2024-10-13 20:06:31.657051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.657093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.662320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.662536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.662574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.667759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.667970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.668008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.673625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.673859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.673909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.679779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.680011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.084 [2024-10-13 20:06:31.680055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.084 [2024-10-13 20:06:31.685857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.084 [2024-10-13 20:06:31.686090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.686132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.691947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.692183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.692228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.698034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.698269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.698315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.704155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.704386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.704453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.710446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.710658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.710716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.716688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.716931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.716977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.722769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.723005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.723050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.729025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.729251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.729295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.735266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.735507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.735553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 
[2024-10-13 20:06:31.742035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.742333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.742377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.748547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.748772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.748816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.754650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.754897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.754960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.760930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.761179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.761222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.767129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.767475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.767521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.773259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.773583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.773623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.779457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.779673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.779718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.785759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.786010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.786055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.792821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.793072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.793117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.799800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.800102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.800148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.807087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.807348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.807401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.814386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.814755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.814803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.820821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.821091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.821153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.827569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.827902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.827947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.834500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.834759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.834803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.841288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.841542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.841582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.848245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.848514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.848554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.855189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.855457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.855499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.085 4595.00 IOPS, 574.38 MiB/s [2024-10-13T18:06:31.900Z] [2024-10-13 20:06:31.863515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.863766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.085 [2024-10-13 20:06:31.863813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.085 [2024-10-13 20:06:31.869593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.085 [2024-10-13 20:06:31.869851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.086 [2024-10-13 20:06:31.869893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.086 [2024-10-13 20:06:31.875904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.086 [2024-10-13 20:06:31.876189] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.086 [2024-10-13 20:06:31.876232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.086 [2024-10-13 20:06:31.882202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.086 [2024-10-13 20:06:31.882485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.086 [2024-10-13 20:06:31.882537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.086 [2024-10-13 20:06:31.888345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.086 [2024-10-13 20:06:31.888611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.086 [2024-10-13 20:06:31.888652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.086 [2024-10-13 20:06:31.894727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.086 [2024-10-13 20:06:31.895092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.086 [2024-10-13 20:06:31.895137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.901161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.901459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.901499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.907513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.907758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.907804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.913833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.914128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.914173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.920066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 
[2024-10-13 20:06:31.920372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.920442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.926286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.926603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.926645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.932518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.932770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.932815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.938706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.939008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.939054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.945040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.945284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.945330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.951293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.951572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.951614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.957290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.957613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.957655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.964064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.964386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.964454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.970682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.970962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.971007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.977011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.977292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.977340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.983324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.983581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.983623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.989620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.989880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.989933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:31.995772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:31.996014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:31.996060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:32.002095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:32.002368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:32.002423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.347 
[2024-10-13 20:06:32.008259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:32.008527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:32.008569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:32.014419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:32.014662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:32.014719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:32.020824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:32.021090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:32.021135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:32.026992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:32.027345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:32.027390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.347 [2024-10-13 20:06:32.033287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.347 [2024-10-13 20:06:32.033555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.347 [2024-10-13 20:06:32.033596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.039533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.039858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.039904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.045752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.046084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.046133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.051920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.052221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.052266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.058072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.058344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.058389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.064383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.064676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.064736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.070579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.070880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.070925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.076839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.077091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.077136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.083158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.083471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.083518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.089565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.089843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 
20:06:32.089891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.095833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.096132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.096193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.102037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.102294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.102340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.108303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.108568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.108609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.114577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.114826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.114870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.120814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.121063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.121112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.127055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.127309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.127351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.133363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.133646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.133686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.139543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.139798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.139844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.145919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.146200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.146246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.152240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.152541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.152582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.348 [2024-10-13 20:06:32.158851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.348 [2024-10-13 20:06:32.159108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.348 [2024-10-13 20:06:32.159158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.165024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.165276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.165321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.171378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.171686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.171746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.177705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.178010] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.178054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.183971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.184258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.184303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.190346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.190612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.190653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.196726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.196991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.197036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.202859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.203167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.203219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.209076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.209409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.209455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.215329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.215625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.215666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.221460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.221765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.221810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.227612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.227875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.227937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.233674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.233946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.233992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.239955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.240203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.240248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.246051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.246299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.246343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.252303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.252569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.252620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.258492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.258745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.258790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 
20:06:32.264741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.264997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.270914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.271214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.271264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.277355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.277671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.277713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.283448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.283674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.609 [2024-10-13 20:06:32.283731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.609 [2024-10-13 20:06:32.289786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.609 [2024-10-13 20:06:32.290036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.290078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.295756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.296013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.296059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.302027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.302365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.302418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.308222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.308542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.308583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.314410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.314660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.314721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.320547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.320805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.320846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.326147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.326381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.326431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.331832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.332066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.332107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.338598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.338962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.339003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.344337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.344584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.344624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.350212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.350459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.350499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.356210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.356459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.356500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.362206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.362462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.362502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.367947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.368176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.368216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.373795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.374108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.374152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.379644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.379879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.379920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.385623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.385862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.385902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.391239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.391508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.391548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.396867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.397147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.397189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.402450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.402719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.402759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.408060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.408307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.408347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.413694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.413924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.413964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.610 [2024-10-13 20:06:32.419281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.610 [2024-10-13 20:06:32.419529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.610 [2024-10-13 20:06:32.419569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.424849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.425097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.425137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.430447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.430692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.430732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.436129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.436355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.436410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.441683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.441946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.441991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.447285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.447544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.447585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.452843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.453129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.453170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.458326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.458584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.458635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.463881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.464111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.464151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.469626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.469906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.469946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.475337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.871 [2024-10-13 20:06:32.475623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.871 [2024-10-13 20:06:32.475665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.871 [2024-10-13 20:06:32.480962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.481213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.481254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.486659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.486892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.486933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.492551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.492818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.492859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.498295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.498565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.498607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.504665] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.504939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.504981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.510905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.511254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.511296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.517953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.518232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.518274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.523934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.524212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.524254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.529990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.530214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.530256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.536140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.536365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.536421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.542093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.542376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.542433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.547775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.547999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.548040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.553777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.554016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.554058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.560002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.560250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.560303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.565765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.566080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.566121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.571355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.571608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.571650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.577199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.577438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.577488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.582853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.583086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.583129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.588542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.588811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.588852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.594179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.594413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.594453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.599964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.600208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.600250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.605745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.605971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.606027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.611541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.611822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.611864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.617332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.617564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.617606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.623025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.623285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.623327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.628745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.629004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.872 [2024-10-13 20:06:32.629046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.872 [2024-10-13 20:06:32.634540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.872 [2024-10-13 20:06:32.634851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.634893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.640418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.640695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.640737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.646206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.646447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.646489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.652109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.652419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.652462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.657882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.658125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.658178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.663716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.663962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.664004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.669471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.669701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.669742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.675180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.675425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.675465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.873 [2024-10-13 20:06:32.681039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:42.873 [2024-10-13 20:06:32.681265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.873 [2024-10-13 20:06:32.681305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.686884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.687123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.687165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.692623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.692846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.692891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.698348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.698599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.698641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.704329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.704573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.704614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.710171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.710422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.710463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.715751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.716002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.716043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.721424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.721656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.721697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.727189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.727450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.727489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.733032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.733274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.733315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.738812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.739152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.739192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.744510] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.744740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.133 [2024-10-13 20:06:32.744781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.133 [2024-10-13 20:06:32.750242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.133 [2024-10-13 20:06:32.750476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.750516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.755944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.756195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.756246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.761755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.761984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.762025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.767500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.767732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.767773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.773214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.773455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.773506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.778905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.779133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.779178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.784581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.784876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.790340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.790583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.790623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.796164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.796440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.796480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.801984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.802272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.807777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.808114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.808155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.813593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.813821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.813867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.819732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.819973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.820014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.825596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.825823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.825864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.831480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.831714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.831755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.837256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.837599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.837640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.843221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.843508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.843550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.848962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.849230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.849272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.134 [2024-10-13 20:06:32.854859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.855093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.855133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.134 4879.00 IOPS, 609.88 MiB/s [2024-10-13T18:06:32.949Z] [2024-10-13 20:06:32.861718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:36:43.134 [2024-10-13 20:06:32.861902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.134 [2024-10-13 20:06:32.861940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.134 00:36:43.134 Latency(us) 00:36:43.134 [2024-10-13T18:06:32.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.134 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:43.134 nvme0n1 : 2.00 4877.04 609.63 0.00 0.00 3270.36 2548.62 11262.48 00:36:43.134 [2024-10-13T18:06:32.949Z] =================================================================================================================== 00:36:43.134 [2024-10-13T18:06:32.949Z] Total : 4877.04 609.63 0.00 0.00 3270.36 2548.62 11262.48 00:36:43.134 { 00:36:43.134 "results": [ 00:36:43.134 { 00:36:43.134 "job": "nvme0n1", 00:36:43.134 "core_mask": "0x2", 00:36:43.134 "workload": "randwrite", 00:36:43.134 "status": "finished", 00:36:43.134 "queue_depth": 16, 00:36:43.134 "io_size": 131072, 00:36:43.134 "runtime": 2.004086, 00:36:43.134 "iops": 4877.036215012729, 00:36:43.134 "mibps": 609.6295268765912, 00:36:43.134 "io_failed": 0, 00:36:43.134 "io_timeout": 0, 00:36:43.134 "avg_latency_us": 3270.356860908381, 00:36:43.134 "min_latency_us": 2548.6222222222223, 00:36:43.134 "max_latency_us": 11262.482962962962 00:36:43.134 } 00:36:43.134 ], 00:36:43.134 "core_count": 1 00:36:43.134 } 00:36:43.134 20:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:43.134 20:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:43.134 20:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:43.134 | .driver_specific 00:36:43.134 | .nvme_error 00:36:43.134 | .status_code 00:36:43.134 | .command_transient_transport_error' 00:36:43.134 20:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 315 > 0 )) 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3152846 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3152846 ']' 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3152846 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152846 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152846' 00:36:43.394 killing process with pid 3152846 
00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3152846 00:36:43.394 Received shutdown signal, test time was about 2.000000 seconds 00:36:43.394 00:36:43.394 Latency(us) 00:36:43.394 [2024-10-13T18:06:33.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.394 [2024-10-13T18:06:33.209Z] =================================================================================================================== 00:36:43.394 [2024-10-13T18:06:33.209Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:43.394 20:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3152846 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3150838 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3150838 ']' 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3150838 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3150838 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3150838' 00:36:44.336 killing process with pid 3150838 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3150838 00:36:44.336 20:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3150838 00:36:45.713 00:36:45.713 real 0m23.186s 00:36:45.713 user 0m45.299s 00:36:45.713 sys 0m4.730s 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.713 ************************************ 00:36:45.713 END TEST nvmf_digest_error 00:36:45.713 ************************************ 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:45.713 rmmod nvme_tcp 00:36:45.713 rmmod nvme_fabrics 00:36:45.713 rmmod nvme_keyring 00:36:45.713 20:06:35 
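(Note on the pass condition traced above: get_transient_errcount reads the error counter back from bdevperf over its RPC socket and requires it to be non-zero. A minimal shell sketch of the equivalent query, reconstructed from the traced commands rather than copied verbatim from digest.sh:)

    # Ask bdevperf (listening on /var/tmp/bperf.sock) for nvme0n1 iostat and extract
    # the transient transport error counter; the run above reported 315 such errors.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errs=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))   # the digest-error test only passes if injected digest errors were observed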
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3150838 ']' 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3150838 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3150838 ']' 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3150838 00:36:45.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3150838) - No such process 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3150838 is not found' 00:36:45.713 Process with pid 3150838 is not found 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:45.713 20:06:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:47.622 00:36:47.622 real 0m52.827s 00:36:47.622 user 1m35.720s 00:36:47.622 sys 0m11.007s 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:47.622 ************************************ 00:36:47.622 END TEST nvmf_digest 00:36:47.622 ************************************ 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:47.622 20:06:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.882 
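(The cleanup traced above is the usual nvmftestfini sequence. A condensed sketch of the steps visible in the trace; the namespace removal inside _remove_spdk_ns is assumed rather than shown:)

    killprocess "$nvmfpid"                                # target already exited here, hence 'No such process'
    modprobe -v -r nvme-tcp nvme-fabrics                  # unload host-side NVMe/TCP modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK test firewall rules
    ip netns delete cvl_0_0_ns_spdk                       # _remove_spdk_ns (assumed implementation)
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address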
************************************ 00:36:47.882 START TEST nvmf_bdevperf 00:36:47.882 ************************************ 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:47.882 * Looking for test storage... 00:36:47.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.882 --rc genhtml_branch_coverage=1 00:36:47.882 --rc genhtml_function_coverage=1 00:36:47.882 --rc genhtml_legend=1 00:36:47.882 --rc geninfo_all_blocks=1 00:36:47.882 --rc geninfo_unexecuted_blocks=1 00:36:47.882 00:36:47.882 ' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.882 --rc genhtml_branch_coverage=1 00:36:47.882 --rc genhtml_function_coverage=1 00:36:47.882 --rc genhtml_legend=1 00:36:47.882 --rc geninfo_all_blocks=1 00:36:47.882 --rc geninfo_unexecuted_blocks=1 00:36:47.882 00:36:47.882 ' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.882 --rc genhtml_branch_coverage=1 00:36:47.882 --rc genhtml_function_coverage=1 00:36:47.882 --rc genhtml_legend=1 00:36:47.882 --rc geninfo_all_blocks=1 00:36:47.882 --rc geninfo_unexecuted_blocks=1 00:36:47.882 00:36:47.882 ' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.882 --rc genhtml_branch_coverage=1 00:36:47.882 --rc genhtml_function_coverage=1 00:36:47.882 --rc genhtml_legend=1 00:36:47.882 --rc geninfo_all_blocks=1 00:36:47.882 --rc geninfo_unexecuted_blocks=1 00:36:47.882 00:36:47.882 ' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:47.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:47.882 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:47.883 20:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:49.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:49.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:49.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:49.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:49.808 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:50.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:36:50.067 00:36:50.067 --- 10.0.0.2 ping statistics --- 00:36:50.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.067 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:50.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:50.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:36:50.067 00:36:50.067 --- 10.0.0.1 ping statistics --- 00:36:50.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.067 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3155496 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3155496 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3155496 ']' 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:50.067 20:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:50.067 [2024-10-13 20:06:39.773428] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
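(For orientation, the interface setup traced above builds a two-namespace loopback topology on the two E810 ports: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2, while the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace. A condensed sketch of the traced commands:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachability check in both directions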
00:36:50.067 [2024-10-13 20:06:39.773575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.326 [2024-10-13 20:06:39.909938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:50.326 [2024-10-13 20:06:40.043883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.326 [2024-10-13 20:06:40.043979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.326 [2024-10-13 20:06:40.044005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.326 [2024-10-13 20:06:40.044031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.326 [2024-10-13 20:06:40.044051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.326 [2024-10-13 20:06:40.046628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.326 [2024-10-13 20:06:40.046747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.326 [2024-10-13 20:06:40.046754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:51.263 [2024-10-13 20:06:40.776119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:51.263 Malloc0 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:51.263 [2024-10-13 20:06:40.888159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:51.263 { 00:36:51.263 "params": { 00:36:51.263 "name": "Nvme$subsystem", 00:36:51.263 "trtype": "$TEST_TRANSPORT", 00:36:51.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:51.263 "adrfam": "ipv4", 00:36:51.263 "trsvcid": "$NVMF_PORT", 00:36:51.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:51.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:51.263 "hdgst": ${hdgst:-false}, 00:36:51.263 "ddgst": ${ddgst:-false} 00:36:51.263 }, 00:36:51.263 "method": "bdev_nvme_attach_controller" 00:36:51.263 } 00:36:51.263 EOF 00:36:51.263 )") 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:36:51.263 20:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:51.263 "params": { 00:36:51.263 "name": "Nvme1", 00:36:51.263 "trtype": "tcp", 00:36:51.263 "traddr": "10.0.0.2", 00:36:51.263 "adrfam": "ipv4", 00:36:51.263 "trsvcid": "4420", 00:36:51.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:51.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:51.263 "hdgst": false, 00:36:51.263 "ddgst": false 00:36:51.263 }, 00:36:51.263 "method": "bdev_nvme_attach_controller" 00:36:51.263 }' 00:36:51.263 [2024-10-13 20:06:40.974745] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:36:51.263 [2024-10-13 20:06:40.974901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155650 ] 00:36:51.523 [2024-10-13 20:06:41.103666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.523 [2024-10-13 20:06:41.230668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.090 Running I/O for 1 seconds... 00:36:53.282 6024.00 IOPS, 23.53 MiB/s 00:36:53.282 Latency(us) 00:36:53.282 [2024-10-13T18:06:43.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.282 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:53.282 Verification LBA range: start 0x0 length 0x4000 00:36:53.282 Nvme1n1 : 1.02 6047.48 23.62 0.00 0.00 21058.24 4708.88 17282.09 00:36:53.282 [2024-10-13T18:06:43.097Z] =================================================================================================================== 00:36:53.282 [2024-10-13T18:06:43.097Z] Total : 6047.48 23.62 0.00 0.00 21058.24 4708.88 17282.09 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3155926 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:54.221 { 00:36:54.221 "params": { 00:36:54.221 "name": "Nvme$subsystem", 00:36:54.221 "trtype": "$TEST_TRANSPORT", 00:36:54.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.221 "adrfam": "ipv4", 00:36:54.221 "trsvcid": "$NVMF_PORT", 00:36:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.221 "hdgst": ${hdgst:-false}, 00:36:54.221 "ddgst": ${ddgst:-false} 00:36:54.221 }, 00:36:54.221 "method": "bdev_nvme_attach_controller" 00:36:54.221 } 00:36:54.221 EOF 00:36:54.221 )") 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
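The first pass is a one-second sanity run of 4 KiB verify I/O at queue depth 128 against Nvme1n1. Two quick consistency checks on the table: the MiB/s column is simply IOPS times the 4 KiB I/O size, and the roughly 21 ms average latency squares with about 6000 IOPS at queue depth 128 (Little's law, queue depth divided by latency). Assuming bc is available:

# MiB/s = IOPS * io_size / 2^20
echo 'scale=2; 6047.48 * 4096 / 1048576' | bc      # -> 23.62, matching the MiB/s column
# Little's law cross-check: IOPS ~ queue_depth / average_latency
echo 'scale=2; 128 * 1000000 / 21058.24' | bc      # -> ~6078, close to the measured 6047.48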
00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:36:54.221 20:06:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:54.221 "params": { 00:36:54.221 "name": "Nvme1", 00:36:54.221 "trtype": "tcp", 00:36:54.221 "traddr": "10.0.0.2", 00:36:54.221 "adrfam": "ipv4", 00:36:54.221 "trsvcid": "4420", 00:36:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:54.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:54.221 "hdgst": false, 00:36:54.221 "ddgst": false 00:36:54.221 }, 00:36:54.221 "method": "bdev_nvme_attach_controller" 00:36:54.221 }' 00:36:54.221 [2024-10-13 20:06:43.765749] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:36:54.221 [2024-10-13 20:06:43.765893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155926 ] 00:36:54.221 [2024-10-13 20:06:43.901290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.221 [2024-10-13 20:06:44.028392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.787 Running I/O for 15 seconds... 00:36:57.103 6110.00 IOPS, 23.87 MiB/s [2024-10-13T18:06:46.918Z] 6267.50 IOPS, 24.48 MiB/s [2024-10-13T18:06:46.918Z] 20:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3155496 00:36:57.103 20:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:57.103 [2024-10-13 20:06:46.707762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.103 [2024-10-13 20:06:46.707874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.707932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.707961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.707990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 
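This second bdevperf instance is the actual failover exercise: the same verify workload for 15 seconds, launched with -f (which, going by how the test proceeds, keeps bdevperf running once I/O starts failing; treat that reading as an assumption). Three seconds into the run, bdevperf.sh kills the nvmf target, PID 3155496, out from under it, and every command still outstanding on the I/O qpair is completed with ABORTED - SQ DELETION, which is what the wall of NOTICE lines below records. The fault-injection step itself is only this (a sketch; the variable name is illustrative, the script tracks the real PID on its own):

# Kill the nvmf_tgt started earlier while bdevperf's 15 s verify run is in flight,
# then give the host side a moment to notice the dead connection.
target_pid=3155496
kill -9 "$target_pid"
sleep 3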
20:06:46.708178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.103 [2024-10-13 20:06:46.708360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.103 [2024-10-13 20:06:46.708410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.708966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.708990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.104 [2024-10-13 20:06:46.709731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.104 [2024-10-13 20:06:46.709957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.104 [2024-10-13 20:06:46.709985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:57.105 [2024-10-13 20:06:46.710347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710893] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.105 [2024-10-13 20:06:46.710969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.710996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.105 [2024-10-13 20:06:46.711559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.105 [2024-10-13 20:06:46.711582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.711603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.711646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.711714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.711778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.711843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.711898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.711950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.711982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.712007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.712060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.712111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.712164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.712216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.106 [2024-10-13 20:06:46.712268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 
nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.712965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.712992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.713017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.713044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111392 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.713068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.106 [2024-10-13 20:06:46.713095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.106 [2024-10-13 20:06:46.713120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 
[2024-10-13 20:06:46.713636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:57.107 [2024-10-13 20:06:46.713965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.713992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.107 [2024-10-13 20:06:46.714723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.107 [2024-10-13 20:06:46.714759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:36:57.107 [2024-10-13 20:06:46.714791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:57.107 [2024-10-13 20:06:46.714813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:57.107 [2024-10-13 20:06:46.714844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110880 len:8 PRP1 0x0 PRP2 0x0 00:36:57.108 [2024-10-13 20:06:46.714868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.108 [2024-10-13 20:06:46.715181] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:36:57.108 [2024-10-13 20:06:46.715296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.108 [2024-10-13 20:06:46.715329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.108 [2024-10-13 20:06:46.715356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.108 [2024-10-13 20:06:46.715391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.108 [2024-10-13 20:06:46.715443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.108 [2024-10-13 20:06:46.715465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.108 [2024-10-13 20:06:46.715486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.108 [2024-10-13 20:06:46.715507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.108 [2024-10-13 20:06:46.715527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.719710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.719791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 [2024-10-13 20:06:46.720553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.108 [2024-10-13 20:06:46.720609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.108 [2024-10-13 20:06:46.720638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.720934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 
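Once the qpair has been disconnected and freed, the bdev_nvme layer starts resetting the controller, and every attempt dies in connect() with errno 111, ECONNREFUSED on Linux, because nothing is listening on 10.0.0.2:4420 any more. The same condition is easy to confirm from a shell while the target is down (assuming an OpenBSD-style netcat that supports -z):

# Probe the now-dead NVMe/TCP listener; the connection is refused just as in the log.
nc -z -w 1 10.0.0.2 4420 || echo 'no listener on 10.0.0.2:4420 (target is gone)'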
[2024-10-13 20:06:46.721237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.108 [2024-10-13 20:06:46.721271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.108 [2024-10-13 20:06:46.721311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.108 [2024-10-13 20:06:46.725490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.108 [2024-10-13 20:06:46.734517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.735081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.108 [2024-10-13 20:06:46.735135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.108 [2024-10-13 20:06:46.735162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.735472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 [2024-10-13 20:06:46.735758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.108 [2024-10-13 20:06:46.735790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.108 [2024-10-13 20:06:46.735813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.108 [2024-10-13 20:06:46.739976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.108 [2024-10-13 20:06:46.749082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.749583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.108 [2024-10-13 20:06:46.749627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.108 [2024-10-13 20:06:46.749655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.749941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 [2024-10-13 20:06:46.750241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.108 [2024-10-13 20:06:46.750273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.108 [2024-10-13 20:06:46.750295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.108 [2024-10-13 20:06:46.754404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.108 [2024-10-13 20:06:46.763541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.764006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.108 [2024-10-13 20:06:46.764056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.108 [2024-10-13 20:06:46.764083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.764370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 [2024-10-13 20:06:46.764666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.108 [2024-10-13 20:06:46.764698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.108 [2024-10-13 20:06:46.764721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.108 [2024-10-13 20:06:46.768811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.108 [2024-10-13 20:06:46.777960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.778449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.108 [2024-10-13 20:06:46.778498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.108 [2024-10-13 20:06:46.778525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.778808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 [2024-10-13 20:06:46.779098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.108 [2024-10-13 20:06:46.779130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.108 [2024-10-13 20:06:46.779152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.108 [2024-10-13 20:06:46.783221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.108 [2024-10-13 20:06:46.792535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.792991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.108 [2024-10-13 20:06:46.793040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.108 [2024-10-13 20:06:46.793067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.108 [2024-10-13 20:06:46.793349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.108 [2024-10-13 20:06:46.793642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.108 [2024-10-13 20:06:46.793675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.108 [2024-10-13 20:06:46.793698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.108 [2024-10-13 20:06:46.797763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.108 [2024-10-13 20:06:46.807112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.108 [2024-10-13 20:06:46.807597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.807649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.807676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.807956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.808240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.808271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.808294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.812360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.109 [2024-10-13 20:06:46.821688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.109 [2024-10-13 20:06:46.822164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.822214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.822241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.822537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.822821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.822853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.822882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.826965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.109 [2024-10-13 20:06:46.836261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.109 [2024-10-13 20:06:46.836749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.836799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.836826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.837109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.837391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.837434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.837456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.841522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.109 [2024-10-13 20:06:46.850623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.109 [2024-10-13 20:06:46.851117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.851167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.851194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.851487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.851771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.851802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.851824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.855903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.109 [2024-10-13 20:06:46.864999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.109 [2024-10-13 20:06:46.865432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.865481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.865507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.865789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.866072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.866104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.866126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.870326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.109 [2024-10-13 20:06:46.879392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.109 [2024-10-13 20:06:46.879891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.879945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.879973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.880256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.880552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.880585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.880608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.884670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.109 [2024-10-13 20:06:46.893949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.109 [2024-10-13 20:06:46.894427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.109 [2024-10-13 20:06:46.894483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.109 [2024-10-13 20:06:46.894512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.109 [2024-10-13 20:06:46.894795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.109 [2024-10-13 20:06:46.895077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.109 [2024-10-13 20:06:46.895108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.109 [2024-10-13 20:06:46.895131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.109 [2024-10-13 20:06:46.899170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.109 [2024-10-13 20:06:46.908441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.110 [2024-10-13 20:06:46.908910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.110 [2024-10-13 20:06:46.908959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.110 [2024-10-13 20:06:46.908985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.110 [2024-10-13 20:06:46.909267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.110 [2024-10-13 20:06:46.909561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.110 [2024-10-13 20:06:46.909594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.110 [2024-10-13 20:06:46.909616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.110 [2024-10-13 20:06:46.913674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.369 [2024-10-13 20:06:46.922954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.369 [2024-10-13 20:06:46.923426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.369 [2024-10-13 20:06:46.923473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.369 [2024-10-13 20:06:46.923500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.369 [2024-10-13 20:06:46.923780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.369 [2024-10-13 20:06:46.924068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.369 [2024-10-13 20:06:46.924100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.369 [2024-10-13 20:06:46.924142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.369 [2024-10-13 20:06:46.928193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.369 [2024-10-13 20:06:46.937447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.369 [2024-10-13 20:06:46.937937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.369 [2024-10-13 20:06:46.937987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.369 [2024-10-13 20:06:46.938013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.369 [2024-10-13 20:06:46.938293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.369 [2024-10-13 20:06:46.938598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.369 [2024-10-13 20:06:46.938631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.369 [2024-10-13 20:06:46.938656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.369 [2024-10-13 20:06:46.942688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.369 [2024-10-13 20:06:46.951962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.369 [2024-10-13 20:06:46.952412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.369 [2024-10-13 20:06:46.952461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.369 [2024-10-13 20:06:46.952487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.369 [2024-10-13 20:06:46.952766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.369 [2024-10-13 20:06:46.953047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.369 [2024-10-13 20:06:46.953078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.369 [2024-10-13 20:06:46.953101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.369 [2024-10-13 20:06:46.957145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.369 [2024-10-13 20:06:46.966424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.369 [2024-10-13 20:06:46.966890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.369 [2024-10-13 20:06:46.966932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.369 [2024-10-13 20:06:46.966960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.369 [2024-10-13 20:06:46.967242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.369 [2024-10-13 20:06:46.967547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.369 [2024-10-13 20:06:46.967579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:46.967609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:46.971650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.370 [2024-10-13 20:06:46.980906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:46.981326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:46.981369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:46.981413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:46.981696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:46.981977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:46.982009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:46.982031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:46.986081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.370 [2024-10-13 20:06:46.995402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:46.995924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:46.995977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:46.996004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:46.996285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:46.996578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:46.996610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:46.996633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.000683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.370 [2024-10-13 20:06:47.009949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.010431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.010474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.010510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.010793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.011076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.011109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.011131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.015174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.370 [2024-10-13 20:06:47.024468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.025017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.025080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.025107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.025386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.025677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.025710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.025733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.029780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.370 [2024-10-13 20:06:47.038838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.039285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.039333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.039360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.039665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.039959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.039990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.040013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.044069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.370 [2024-10-13 20:06:47.053344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.053831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.053883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.053910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.054193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.054499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.054533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.054555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.058597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.370 [2024-10-13 20:06:47.067875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.068338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.068391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.068439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.068729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.069011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.069043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.069066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.073142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.370 [2024-10-13 20:06:47.082381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.082851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.082899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.082925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.083207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.083521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.083563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.083587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.087636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.370 [2024-10-13 20:06:47.096922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.097370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.097429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.097458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.097741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.098021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.098053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.098075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.102116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.370 [2024-10-13 20:06:47.111374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.111835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.111883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.111909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.370 [2024-10-13 20:06:47.112191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.370 [2024-10-13 20:06:47.112496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.370 [2024-10-13 20:06:47.112529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.370 [2024-10-13 20:06:47.112559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.370 [2024-10-13 20:06:47.116602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.370 [2024-10-13 20:06:47.125845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.370 [2024-10-13 20:06:47.126300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.370 [2024-10-13 20:06:47.126342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.370 [2024-10-13 20:06:47.126368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.371 [2024-10-13 20:06:47.126667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.371 [2024-10-13 20:06:47.126950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.371 [2024-10-13 20:06:47.126982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.371 [2024-10-13 20:06:47.127004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.371 [2024-10-13 20:06:47.131044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.371 [2024-10-13 20:06:47.140290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.371 [2024-10-13 20:06:47.140739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.371 [2024-10-13 20:06:47.140788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.371 [2024-10-13 20:06:47.140815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.371 [2024-10-13 20:06:47.141094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.371 [2024-10-13 20:06:47.141375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.371 [2024-10-13 20:06:47.141425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.371 [2024-10-13 20:06:47.141452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.371 [2024-10-13 20:06:47.145493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.371 [2024-10-13 20:06:47.154756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.371 [2024-10-13 20:06:47.155200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.371 [2024-10-13 20:06:47.155252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.371 [2024-10-13 20:06:47.155278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.371 [2024-10-13 20:06:47.155582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.371 [2024-10-13 20:06:47.155863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.371 [2024-10-13 20:06:47.155894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.371 [2024-10-13 20:06:47.155917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.371 [2024-10-13 20:06:47.159960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.371 [2024-10-13 20:06:47.169245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.371 [2024-10-13 20:06:47.169730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.371 [2024-10-13 20:06:47.169782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.371 [2024-10-13 20:06:47.169809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.371 [2024-10-13 20:06:47.170094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.371 [2024-10-13 20:06:47.170385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.371 [2024-10-13 20:06:47.170428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.371 [2024-10-13 20:06:47.170451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.371 [2024-10-13 20:06:47.174505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.371 [2024-10-13 20:06:47.183790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.632 [2024-10-13 20:06:47.184253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.632 [2024-10-13 20:06:47.184305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.632 [2024-10-13 20:06:47.184332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.632 [2024-10-13 20:06:47.184635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.632 [2024-10-13 20:06:47.184917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.632 [2024-10-13 20:06:47.184949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.632 [2024-10-13 20:06:47.184972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.632 [2024-10-13 20:06:47.189059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.632 [2024-10-13 20:06:47.198179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.632 [2024-10-13 20:06:47.198626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.632 [2024-10-13 20:06:47.198669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.632 [2024-10-13 20:06:47.198696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.632 [2024-10-13 20:06:47.198978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.632 [2024-10-13 20:06:47.199259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.632 [2024-10-13 20:06:47.199291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.632 [2024-10-13 20:06:47.199314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.632 [2024-10-13 20:06:47.203386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.632 [2024-10-13 20:06:47.212747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.632 [2024-10-13 20:06:47.213211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.632 [2024-10-13 20:06:47.213277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.632 [2024-10-13 20:06:47.213304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.632 [2024-10-13 20:06:47.213620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.632 [2024-10-13 20:06:47.213921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.632 [2024-10-13 20:06:47.213954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.632 [2024-10-13 20:06:47.213977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.632 [2024-10-13 20:06:47.218068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.632 [2024-10-13 20:06:47.227287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.632 [2024-10-13 20:06:47.227769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.632 [2024-10-13 20:06:47.227812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.632 [2024-10-13 20:06:47.227846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.632 [2024-10-13 20:06:47.228129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.632 [2024-10-13 20:06:47.228444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.632 [2024-10-13 20:06:47.228489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.632 [2024-10-13 20:06:47.228513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.632 [2024-10-13 20:06:47.232616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.632 [2024-10-13 20:06:47.241833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.632 [2024-10-13 20:06:47.242313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.632 [2024-10-13 20:06:47.242354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.632 [2024-10-13 20:06:47.242390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.632 [2024-10-13 20:06:47.242688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.632 [2024-10-13 20:06:47.242977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.632 [2024-10-13 20:06:47.243009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.632 [2024-10-13 20:06:47.243033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.632 [2024-10-13 20:06:47.247144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.632 [2024-10-13 20:06:47.256354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.632 [2024-10-13 20:06:47.256837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.632 [2024-10-13 20:06:47.256889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.632 [2024-10-13 20:06:47.256916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.632 [2024-10-13 20:06:47.257197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.632 [2024-10-13 20:06:47.257495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.632 [2024-10-13 20:06:47.257528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.632 [2024-10-13 20:06:47.257568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.632 [2024-10-13 20:06:47.261713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.633 [2024-10-13 20:06:47.270871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.271336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.271385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.271422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.271707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.271991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.272022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.272045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.276113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.633 [2024-10-13 20:06:47.285477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.285955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.286004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.286030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.286315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.286611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.286644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.286667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.290747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.633 [2024-10-13 20:06:47.299910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.300360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.300411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.300442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.300725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.301011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.301044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.301067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.305162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.633 [2024-10-13 20:06:47.314267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.314769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.314831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.314858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.315141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.315438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.315472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.315496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.319589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.633 [2024-10-13 20:06:47.328713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.329216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.329275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.329302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.329598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.329881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.329914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.329937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.334013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.633 [2024-10-13 20:06:47.343126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.343564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.343620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.343648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.343931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.344216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.344249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.344271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.348334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.633 [2024-10-13 20:06:47.357724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.358204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.358265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.358292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.358599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.358884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.358917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.358941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.363014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.633 [2024-10-13 20:06:47.372103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.372608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.372668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.372695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.372977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.373259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.373292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.373315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.377385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.633 [2024-10-13 20:06:47.386500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.387001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.387060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.387087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.387370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.387667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.387702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.633 [2024-10-13 20:06:47.387726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.633 [2024-10-13 20:06:47.391806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.633 [2024-10-13 20:06:47.400886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.633 [2024-10-13 20:06:47.401410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.633 [2024-10-13 20:06:47.401469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.633 [2024-10-13 20:06:47.401496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.633 [2024-10-13 20:06:47.401780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.633 [2024-10-13 20:06:47.402062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.633 [2024-10-13 20:06:47.402094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.634 [2024-10-13 20:06:47.402122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.634 [2024-10-13 20:06:47.406202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.634 [2024-10-13 20:06:47.415297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.634 [2024-10-13 20:06:47.415828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.634 [2024-10-13 20:06:47.415870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.634 [2024-10-13 20:06:47.415897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.634 [2024-10-13 20:06:47.416180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.634 [2024-10-13 20:06:47.416482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.634 [2024-10-13 20:06:47.416516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.634 [2024-10-13 20:06:47.416539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.634 [2024-10-13 20:06:47.420615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.634 [2024-10-13 20:06:47.429703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.634 [2024-10-13 20:06:47.430164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.634 [2024-10-13 20:06:47.430206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.634 [2024-10-13 20:06:47.430233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.634 [2024-10-13 20:06:47.430533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.634 [2024-10-13 20:06:47.430815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.634 [2024-10-13 20:06:47.430850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.634 [2024-10-13 20:06:47.430873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.634 [2024-10-13 20:06:47.434938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.634 [2024-10-13 20:06:47.444274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.634 [2024-10-13 20:06:47.444709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.634 [2024-10-13 20:06:47.444752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.634 [2024-10-13 20:06:47.444780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.634 [2024-10-13 20:06:47.445061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.634 [2024-10-13 20:06:47.445346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.634 [2024-10-13 20:06:47.445379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.634 [2024-10-13 20:06:47.445415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.895 [2024-10-13 20:06:47.449512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.895 [2024-10-13 20:06:47.458828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.895 [2024-10-13 20:06:47.459320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.895 [2024-10-13 20:06:47.459381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.895 [2024-10-13 20:06:47.459421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.895 [2024-10-13 20:06:47.459705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.895 [2024-10-13 20:06:47.459989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.895 [2024-10-13 20:06:47.460022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.895 [2024-10-13 20:06:47.460045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.895 [2024-10-13 20:06:47.464117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.895 [2024-10-13 20:06:47.473210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.895 [2024-10-13 20:06:47.473694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.895 [2024-10-13 20:06:47.473738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.895 [2024-10-13 20:06:47.473765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.895 [2024-10-13 20:06:47.474050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.895 [2024-10-13 20:06:47.474335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.895 [2024-10-13 20:06:47.474368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.895 [2024-10-13 20:06:47.474391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.895 [2024-10-13 20:06:47.478492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.895 [2024-10-13 20:06:47.487812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.895 [2024-10-13 20:06:47.488260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.895 [2024-10-13 20:06:47.488304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.895 [2024-10-13 20:06:47.488331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.895 [2024-10-13 20:06:47.488624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.895 [2024-10-13 20:06:47.488909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.895 [2024-10-13 20:06:47.488942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.895 [2024-10-13 20:06:47.488966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.895 [2024-10-13 20:06:47.493036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.895 4604.33 IOPS, 17.99 MiB/s [2024-10-13T18:06:47.710Z] [2024-10-13 20:06:47.502260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.895 [2024-10-13 20:06:47.502734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.895 [2024-10-13 20:06:47.502794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.502822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.503112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.503406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.503440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.503463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.507522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.896 [2024-10-13 20:06:47.516813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.517256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.517298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.517326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.517624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.517906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.517940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.517963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.522006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.896 [2024-10-13 20:06:47.531278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.531734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.531776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.531802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.532083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.532364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.532408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.532434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.536473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.896 [2024-10-13 20:06:47.545730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.546188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.546230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.546257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.546568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.546850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.546889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.546914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.551011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.896 [2024-10-13 20:06:47.560275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.560736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.560780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.560807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.561087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.561369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.561414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.561440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.565502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.896 [2024-10-13 20:06:47.574765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.575214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.575257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.575284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.575582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.575864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.575897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.575921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.579959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.896 [2024-10-13 20:06:47.589221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.589690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.589733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.589760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.590041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.590320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.590353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.590377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.594428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.896 [2024-10-13 20:06:47.603677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.604098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.604141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.604168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.604465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.604748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.604782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.604804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.608851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.896 [2024-10-13 20:06:47.618103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.618562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.618603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.618629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.618910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.619190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.619224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.619247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.623303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.896 [2024-10-13 20:06:47.632558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.633050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.633093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.633120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.633413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.633695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.633729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.633753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.896 [2024-10-13 20:06:47.637790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.896 [2024-10-13 20:06:47.647051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.896 [2024-10-13 20:06:47.647499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.896 [2024-10-13 20:06:47.647541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.896 [2024-10-13 20:06:47.647574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.896 [2024-10-13 20:06:47.647858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.896 [2024-10-13 20:06:47.648138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.896 [2024-10-13 20:06:47.648171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.896 [2024-10-13 20:06:47.648195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.897 [2024-10-13 20:06:47.652261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.897 [2024-10-13 20:06:47.661522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.897 [2024-10-13 20:06:47.661978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.897 [2024-10-13 20:06:47.662019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.897 [2024-10-13 20:06:47.662046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.897 [2024-10-13 20:06:47.662327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.897 [2024-10-13 20:06:47.662624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.897 [2024-10-13 20:06:47.662657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.897 [2024-10-13 20:06:47.662680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.897 [2024-10-13 20:06:47.666718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.897 [2024-10-13 20:06:47.675972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.897 [2024-10-13 20:06:47.676403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.897 [2024-10-13 20:06:47.676445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.897 [2024-10-13 20:06:47.676472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.897 [2024-10-13 20:06:47.676754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.897 [2024-10-13 20:06:47.677034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.897 [2024-10-13 20:06:47.677066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.897 [2024-10-13 20:06:47.677089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.897 [2024-10-13 20:06:47.681135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:57.897 [2024-10-13 20:06:47.690407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.897 [2024-10-13 20:06:47.690840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.897 [2024-10-13 20:06:47.690883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.897 [2024-10-13 20:06:47.690909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.897 [2024-10-13 20:06:47.691192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.897 [2024-10-13 20:06:47.691490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.897 [2024-10-13 20:06:47.691529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.897 [2024-10-13 20:06:47.691554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:57.897 [2024-10-13 20:06:47.695590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:57.897 [2024-10-13 20:06:47.704857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:57.897 [2024-10-13 20:06:47.705292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.897 [2024-10-13 20:06:47.705335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:57.897 [2024-10-13 20:06:47.705363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:57.897 [2024-10-13 20:06:47.705661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:57.897 [2024-10-13 20:06:47.705944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:57.897 [2024-10-13 20:06:47.705978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:57.897 [2024-10-13 20:06:47.706002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.159 [2024-10-13 20:06:47.710050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.159 [2024-10-13 20:06:47.719580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.159 [2024-10-13 20:06:47.720028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.159 [2024-10-13 20:06:47.720070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.159 [2024-10-13 20:06:47.720096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.159 [2024-10-13 20:06:47.720377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.159 [2024-10-13 20:06:47.720672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.159 [2024-10-13 20:06:47.720706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.159 [2024-10-13 20:06:47.720729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.159 [2024-10-13 20:06:47.724770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.159 [2024-10-13 20:06:47.734040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.159 [2024-10-13 20:06:47.734449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.159 [2024-10-13 20:06:47.734493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.159 [2024-10-13 20:06:47.734521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.159 [2024-10-13 20:06:47.734804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.159 [2024-10-13 20:06:47.735088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.159 [2024-10-13 20:06:47.735120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.159 [2024-10-13 20:06:47.735143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.159 [2024-10-13 20:06:47.739194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.159 [2024-10-13 20:06:47.748463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.159 [2024-10-13 20:06:47.748920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.159 [2024-10-13 20:06:47.748962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.159 [2024-10-13 20:06:47.748989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.159 [2024-10-13 20:06:47.749269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.159 [2024-10-13 20:06:47.749564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.159 [2024-10-13 20:06:47.749634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.159 [2024-10-13 20:06:47.749657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.159 [2024-10-13 20:06:47.753740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.159 [2024-10-13 20:06:47.763028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.159 [2024-10-13 20:06:47.763468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.159 [2024-10-13 20:06:47.763511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.159 [2024-10-13 20:06:47.763538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.159 [2024-10-13 20:06:47.763819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.159 [2024-10-13 20:06:47.764102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.159 [2024-10-13 20:06:47.764134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.159 [2024-10-13 20:06:47.764157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.159 [2024-10-13 20:06:47.768198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.159 [2024-10-13 20:06:47.777463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.159 [2024-10-13 20:06:47.777942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.159 [2024-10-13 20:06:47.777984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.159 [2024-10-13 20:06:47.778011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.159 [2024-10-13 20:06:47.778292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.159 [2024-10-13 20:06:47.778602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.159 [2024-10-13 20:06:47.778639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.159 [2024-10-13 20:06:47.778662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.159 [2024-10-13 20:06:47.782693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.159 [2024-10-13 20:06:47.791950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.159 [2024-10-13 20:06:47.792382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.159 [2024-10-13 20:06:47.792433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.159 [2024-10-13 20:06:47.792467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.159 [2024-10-13 20:06:47.792752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.159 [2024-10-13 20:06:47.793036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.159 [2024-10-13 20:06:47.793067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.793090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.797138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.160 [2024-10-13 20:06:47.806412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.806841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.806883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.806911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.807195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.807490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.807522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.807545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.811580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.160 [2024-10-13 20:06:47.820833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.821300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.821342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.821369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.821661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.821944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.821976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.821998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.826029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.160 [2024-10-13 20:06:47.835250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.835676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.835719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.835745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.836027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.836309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.836348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.836372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.840431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.160 [2024-10-13 20:06:47.849701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.850142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.850184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.850211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.850509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.850811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.850844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.850868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.854933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.160 [2024-10-13 20:06:47.864200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.864628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.864680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.864710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.864993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.865275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.865307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.865329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.869369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.160 [2024-10-13 20:06:47.878662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.879120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.879162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.879188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.879481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.879764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.879796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.879818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.883877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.160 [2024-10-13 20:06:47.893355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.893804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.893849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.893876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.894159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.894456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.894490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.894513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.898556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.160 [2024-10-13 20:06:47.907825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.908239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.908281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.908308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.908606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.908888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.908922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.908945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.912988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.160 [2024-10-13 20:06:47.922243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.922677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.922718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.922746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.923028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.923311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.923344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.923367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.927424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.160 [2024-10-13 20:06:47.936680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.937132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.937175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.937211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.160 [2024-10-13 20:06:47.937520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.160 [2024-10-13 20:06:47.937801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.160 [2024-10-13 20:06:47.937833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.160 [2024-10-13 20:06:47.937855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.160 [2024-10-13 20:06:47.941891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.160 [2024-10-13 20:06:47.951172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.160 [2024-10-13 20:06:47.951601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.160 [2024-10-13 20:06:47.951644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.160 [2024-10-13 20:06:47.951671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.161 [2024-10-13 20:06:47.951951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.161 [2024-10-13 20:06:47.952231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.161 [2024-10-13 20:06:47.952263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.161 [2024-10-13 20:06:47.952287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.161 [2024-10-13 20:06:47.956343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.161 [2024-10-13 20:06:47.965592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.161 [2024-10-13 20:06:47.966017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.161 [2024-10-13 20:06:47.966061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.161 [2024-10-13 20:06:47.966089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.161 [2024-10-13 20:06:47.966370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.161 [2024-10-13 20:06:47.966671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.161 [2024-10-13 20:06:47.966705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.161 [2024-10-13 20:06:47.966727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.161 [2024-10-13 20:06:47.970787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.421 [2024-10-13 20:06:47.980080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:47.980569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:47.980612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:47.980641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:47.980921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:47.981211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:47.981245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:47.981269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:47.985314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.421 [2024-10-13 20:06:47.994589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:47.995021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:47.995065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:47.995091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:47.995373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:47.995670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:47.995702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:47.995726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:47.999776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.421 [2024-10-13 20:06:48.009040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.009496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.009538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.009566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.009849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.010130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.010162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.010184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.014237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.421 [2024-10-13 20:06:48.023569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.024029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.024071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.024097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.024379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.024674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.024706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.024728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.028782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.421 [2024-10-13 20:06:48.038073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.038515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.038566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.038592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.038874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.039156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.039189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.039211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.043263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.421 [2024-10-13 20:06:48.052591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.053033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.053076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.053102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.053384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.053680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.053713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.053736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.057782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.421 [2024-10-13 20:06:48.067052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.067478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.067521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.067548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.067830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.068109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.068141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.068163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.072205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.421 [2024-10-13 20:06:48.081471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.081926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.081975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.082007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.082291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.082600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.082634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.082657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.086690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.421 [2024-10-13 20:06:48.095941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.096407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.096449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.096476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.096758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.097040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.097073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.097096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.101140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.421 [2024-10-13 20:06:48.110411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.110840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.110883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.110910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.111193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.111491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.111525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.111548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.421 [2024-10-13 20:06:48.115597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.421 [2024-10-13 20:06:48.124849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.421 [2024-10-13 20:06:48.125306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.421 [2024-10-13 20:06:48.125347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.421 [2024-10-13 20:06:48.125373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.421 [2024-10-13 20:06:48.125666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.421 [2024-10-13 20:06:48.125955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.421 [2024-10-13 20:06:48.125987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.421 [2024-10-13 20:06:48.126010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.130054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.422 [2024-10-13 20:06:48.139299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.139727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.139769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.139796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.140077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.140358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.140390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.140428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.144463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.422 [2024-10-13 20:06:48.153758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.154209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.154250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.154276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.154570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.154850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.154882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.154904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.158942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.422 [2024-10-13 20:06:48.168206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.168688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.168732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.168759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.169042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.169324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.169357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.169380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.173456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.422 [2024-10-13 20:06:48.182724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.183173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.183214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.183240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.183537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.183818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.183851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.183873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.187912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.422 [2024-10-13 20:06:48.197175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.197640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.197682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.197709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.197989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.198271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.198303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.198327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.202374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.422 [2024-10-13 20:06:48.211644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.212075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.212117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.212145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.212441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.212722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.212755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.212777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.216822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.422 [2024-10-13 20:06:48.226079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.422 [2024-10-13 20:06:48.226516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.422 [2024-10-13 20:06:48.226558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.422 [2024-10-13 20:06:48.226591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.422 [2024-10-13 20:06:48.226873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.422 [2024-10-13 20:06:48.227155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.422 [2024-10-13 20:06:48.227187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.422 [2024-10-13 20:06:48.227210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.422 [2024-10-13 20:06:48.231249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.683 [2024-10-13 20:06:48.240554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.241032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.241074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.241101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.241382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.241679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.241712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.241737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.245786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.683 [2024-10-13 20:06:48.255082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.255553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.255595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.255621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.255903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.256185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.256218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.256241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.260291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.683 [2024-10-13 20:06:48.269555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.270009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.270051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.270078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.270360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.270662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.270696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.270719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.274760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.683 [2024-10-13 20:06:48.284013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.284468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.284510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.284538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.284819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.285101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.285134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.285157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.289199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.683 [2024-10-13 20:06:48.298485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.298901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.298943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.298970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.299250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.299548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.299580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.299603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.303645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.683 [2024-10-13 20:06:48.312942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.313384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.313445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.313473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.313754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.314037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.314069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.314091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.318148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.683 [2024-10-13 20:06:48.327520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.327934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.327976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.328002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.328284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.328589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.328622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.328645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.332711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.683 [2024-10-13 20:06:48.342050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.683 [2024-10-13 20:06:48.342478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.683 [2024-10-13 20:06:48.342519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.683 [2024-10-13 20:06:48.342545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.683 [2024-10-13 20:06:48.342827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.683 [2024-10-13 20:06:48.343109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.683 [2024-10-13 20:06:48.343146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.683 [2024-10-13 20:06:48.343169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.683 [2024-10-13 20:06:48.347248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.684 [2024-10-13 20:06:48.356698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.357159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.357201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.357227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.357526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.357810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.357842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.357864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.361954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.684 [2024-10-13 20:06:48.371071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.371500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.371548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.371588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.371874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.372157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.372188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.372210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.376316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.684 [2024-10-13 20:06:48.385453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.385849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.385893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.385920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.386209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.386505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.386537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.386559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.390641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.684 [2024-10-13 20:06:48.399963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.400424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.400478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.400504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.400788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.401071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.401104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.401128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.405200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.684 [2024-10-13 20:06:48.414537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.414956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.414998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.415024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.415319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.415626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.415659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.415683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.419751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.684 [2024-10-13 20:06:48.428906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.429365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.429419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.429449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.429734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.430020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.430051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.430073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.434140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.684 [2024-10-13 20:06:48.443463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.443930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.443972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.443999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.444282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.444587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.444620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.444643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.448721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.684 [2024-10-13 20:06:48.457880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.458335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.458379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.458417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.458722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.459008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.459040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.459063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.463162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.684 [2024-10-13 20:06:48.472318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.472801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.472844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.472871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.473153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.473452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.473486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.473509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.477601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.684 [2024-10-13 20:06:48.486734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.684 [2024-10-13 20:06:48.487177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.684 [2024-10-13 20:06:48.487226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.684 [2024-10-13 20:06:48.487256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.684 [2024-10-13 20:06:48.487552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.684 [2024-10-13 20:06:48.487836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.684 [2024-10-13 20:06:48.487869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.684 [2024-10-13 20:06:48.487892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.684 [2024-10-13 20:06:48.491968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.945 3453.25 IOPS, 13.49 MiB/s [2024-10-13T18:06:48.760Z] [2024-10-13 20:06:48.502463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.945 [2024-10-13 20:06:48.502914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.945 [2024-10-13 20:06:48.502957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.945 [2024-10-13 20:06:48.502984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.945 [2024-10-13 20:06:48.503269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.945 [2024-10-13 20:06:48.503569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.945 [2024-10-13 20:06:48.503604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.945 [2024-10-13 20:06:48.503627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.945 [2024-10-13 20:06:48.507695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.945 [2024-10-13 20:06:48.517038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.517501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.517551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.517580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.517865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.518152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.518185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.518208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.522278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.946 [2024-10-13 20:06:48.531639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.532096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.532138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.532165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.532462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.532746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.532778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.532801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.536878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.946 [2024-10-13 20:06:48.546225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.546635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.546678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.546704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.546986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.547268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.547301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.547324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.551414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.946 [2024-10-13 20:06:48.560841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.561313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.561357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.561384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.561689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.561974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.562007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.562030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.566131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.946 [2024-10-13 20:06:48.575308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.575787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.575830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.575857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.576141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.576456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.576490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.576513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.580607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.946 [2024-10-13 20:06:48.589759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.590234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.590277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.590304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.590599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.590883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.590917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.590940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.595032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.946 [2024-10-13 20:06:48.604109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.604571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.604614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.604641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.604923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.605207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.605240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.605270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.609344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.946 [2024-10-13 20:06:48.618669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.619129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.619172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.619199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.619498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.619781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.619814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.619836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.623904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.946 [2024-10-13 20:06:48.633221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.633690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.633733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.633760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.634043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.634328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.634361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.634384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.638466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.946 [2024-10-13 20:06:48.647779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.648228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.648269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.648297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.648593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.648875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.946 [2024-10-13 20:06:48.648908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.946 [2024-10-13 20:06:48.648931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.946 [2024-10-13 20:06:48.652997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.946 [2024-10-13 20:06:48.662362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.946 [2024-10-13 20:06:48.662797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.946 [2024-10-13 20:06:48.662840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.946 [2024-10-13 20:06:48.662868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.946 [2024-10-13 20:06:48.663150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.946 [2024-10-13 20:06:48.663447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.663482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.663505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.667567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.947 [2024-10-13 20:06:48.676894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.947 [2024-10-13 20:06:48.677366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.947 [2024-10-13 20:06:48.677416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.947 [2024-10-13 20:06:48.677444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.947 [2024-10-13 20:06:48.677729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.947 [2024-10-13 20:06:48.678012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.678045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.678070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.682159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.947 [2024-10-13 20:06:48.691496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.947 [2024-10-13 20:06:48.691953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.947 [2024-10-13 20:06:48.691994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.947 [2024-10-13 20:06:48.692021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.947 [2024-10-13 20:06:48.692304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.947 [2024-10-13 20:06:48.692598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.692631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.692654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.696717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.947 [2024-10-13 20:06:48.706042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.947 [2024-10-13 20:06:48.706488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.947 [2024-10-13 20:06:48.706532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.947 [2024-10-13 20:06:48.706559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.947 [2024-10-13 20:06:48.706851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.947 [2024-10-13 20:06:48.707134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.707167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.707190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.711249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.947 [2024-10-13 20:06:48.720576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.947 [2024-10-13 20:06:48.721028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.947 [2024-10-13 20:06:48.721071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.947 [2024-10-13 20:06:48.721098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.947 [2024-10-13 20:06:48.721381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.947 [2024-10-13 20:06:48.721678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.721711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.721734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.725794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:58.947 [2024-10-13 20:06:48.735402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.947 [2024-10-13 20:06:48.735931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.947 [2024-10-13 20:06:48.735975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.947 [2024-10-13 20:06:48.736003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.947 [2024-10-13 20:06:48.736287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.947 [2024-10-13 20:06:48.736584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.736619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.736642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.740718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:58.947 [2024-10-13 20:06:48.749831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:58.947 [2024-10-13 20:06:48.750293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:58.947 [2024-10-13 20:06:48.750335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:58.947 [2024-10-13 20:06:48.750362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:58.947 [2024-10-13 20:06:48.750654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:58.947 [2024-10-13 20:06:48.750936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:58.947 [2024-10-13 20:06:48.750969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:58.947 [2024-10-13 20:06:48.750999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:58.947 [2024-10-13 20:06:48.755090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.207 [2024-10-13 20:06:48.764423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.207 [2024-10-13 20:06:48.764846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.207 [2024-10-13 20:06:48.764888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.207 [2024-10-13 20:06:48.764915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.207 [2024-10-13 20:06:48.765213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.207 [2024-10-13 20:06:48.765513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.207 [2024-10-13 20:06:48.765547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.207 [2024-10-13 20:06:48.765571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.207 [2024-10-13 20:06:48.769629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.208 [2024-10-13 20:06:48.778949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.779369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.779418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.779448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.779732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.780015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.780048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.780088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.784158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.208 [2024-10-13 20:06:48.793488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.793960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.794003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.794030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.794313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.794609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.794643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.794667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.798732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.208 [2024-10-13 20:06:48.808109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.808573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.808615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.808641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.808924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.809207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.809240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.809264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.813342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.208 [2024-10-13 20:06:48.822736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.823181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.823224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.823251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.823548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.823833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.823867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.823890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.827974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.208 [2024-10-13 20:06:48.837128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.837535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.837577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.837605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.837889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.838171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.838204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.838227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.842311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.208 [2024-10-13 20:06:48.851683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.852133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.852175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.852202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.852504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.852787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.852821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.852844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.856938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.208 [2024-10-13 20:06:48.866252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.866761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.866804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.866832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.867114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.867408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.867443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.867467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.871550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.208 [2024-10-13 20:06:48.880652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.881106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.881147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.881174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.881466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.881748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.881780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.881803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.885852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.208 [2024-10-13 20:06:48.895144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.895599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.895641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.895668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.895949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.896241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.896272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.896301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.900365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.208 [2024-10-13 20:06:48.909688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.910158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.208 [2024-10-13 20:06:48.910200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.208 [2024-10-13 20:06:48.910227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.208 [2024-10-13 20:06:48.910525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.208 [2024-10-13 20:06:48.910809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.208 [2024-10-13 20:06:48.910842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.208 [2024-10-13 20:06:48.910864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.208 [2024-10-13 20:06:48.914923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.208 [2024-10-13 20:06:48.924242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.208 [2024-10-13 20:06:48.924695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:48.924736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:48.924762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:48.925045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:48.925329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:48.925362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:48.925384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:48.929468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.209 [2024-10-13 20:06:48.938836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.209 [2024-10-13 20:06:48.939268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:48.939310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:48.939338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:48.939655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:48.939938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:48.939971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:48.939994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:48.944068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.209 [2024-10-13 20:06:48.953459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.209 [2024-10-13 20:06:48.953926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:48.953968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:48.953995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:48.954280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:48.954575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:48.954608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:48.954632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:48.958706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.209 [2024-10-13 20:06:48.968017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.209 [2024-10-13 20:06:48.968435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:48.968477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:48.968504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:48.968786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:48.969069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:48.969100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:48.969123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:48.973178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.209 [2024-10-13 20:06:48.982517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.209 [2024-10-13 20:06:48.982981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:48.983023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:48.983050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:48.983335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:48.983642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:48.983681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:48.983705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:48.987786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.209 [2024-10-13 20:06:48.996902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.209 [2024-10-13 20:06:48.997348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:48.997392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:48.997438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:48.997730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:48.998015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:48.998049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:48.998073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:49.002163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.209 [2024-10-13 20:06:49.011263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.209 [2024-10-13 20:06:49.011701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.209 [2024-10-13 20:06:49.011743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.209 [2024-10-13 20:06:49.011771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.209 [2024-10-13 20:06:49.012055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.209 [2024-10-13 20:06:49.012337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.209 [2024-10-13 20:06:49.012371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.209 [2024-10-13 20:06:49.012404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.209 [2024-10-13 20:06:49.016502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.471 [2024-10-13 20:06:49.025858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.026316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.026360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.026387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.026699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.026983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.027015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.027038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.031114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.471 [2024-10-13 20:06:49.040499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.040973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.041014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.041041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.041323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.041624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.041668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.041698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.045854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.471 [2024-10-13 20:06:49.055028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.055533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.055576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.055603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.055889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.056174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.056207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.056230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.060311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.471 [2024-10-13 20:06:49.069501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.069967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.070010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.070036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.070317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.070624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.070658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.070690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.074810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.471 [2024-10-13 20:06:49.083932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.084381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.084431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.084458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.084740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.085023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.085057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.085081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.089155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.471 [2024-10-13 20:06:49.098496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.098938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.098981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.099009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.099291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.099586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.099620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.099643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.103700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.471 [2024-10-13 20:06:49.112857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.113315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.113357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.113383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.113679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.113961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.113994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.114017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.118085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.471 [2024-10-13 20:06:49.127407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.127866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.127908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.127935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.128218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.128526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.128559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.128583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.132648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.471 [2024-10-13 20:06:49.141955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.142410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.142454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.142480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.142771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.143054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.471 [2024-10-13 20:06:49.143087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.471 [2024-10-13 20:06:49.143110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.471 [2024-10-13 20:06:49.147170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.471 [2024-10-13 20:06:49.156531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.471 [2024-10-13 20:06:49.156980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.471 [2024-10-13 20:06:49.157022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.471 [2024-10-13 20:06:49.157049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.471 [2024-10-13 20:06:49.157331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.471 [2024-10-13 20:06:49.157626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.157660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.157683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.161742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.472 [2024-10-13 20:06:49.171093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.171560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.171603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.171630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.171914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.172196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.172230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.172253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.176320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.472 [2024-10-13 20:06:49.185639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.186089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.186132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.186160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.186455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.186739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.186779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.186803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.190862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.472 [2024-10-13 20:06:49.200191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.200651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.200706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.200734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.201018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.201302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.201335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.201357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.205429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.472 [2024-10-13 20:06:49.214751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.215211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.215253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.215280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.215574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.215855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.215888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.215911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.219972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.472 [2024-10-13 20:06:49.229275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.229738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.229779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.229806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.230089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.230371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.230415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.230442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.234498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.472 [2024-10-13 20:06:49.243822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.244281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.244324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.244351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.244645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.244930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.244962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.244985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.249042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.472 [2024-10-13 20:06:49.258377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.258854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.258897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.258924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.259206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.259502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.259536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.259561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.263626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.472 [2024-10-13 20:06:49.272942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.472 [2024-10-13 20:06:49.273403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.472 [2024-10-13 20:06:49.273446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.472 [2024-10-13 20:06:49.273473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.472 [2024-10-13 20:06:49.273756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.472 [2024-10-13 20:06:49.274038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.472 [2024-10-13 20:06:49.274072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.472 [2024-10-13 20:06:49.274095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.472 [2024-10-13 20:06:49.278159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.732 [2024-10-13 20:06:49.287478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.732 [2024-10-13 20:06:49.287947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.732 [2024-10-13 20:06:49.287989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.732 [2024-10-13 20:06:49.288022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.732 [2024-10-13 20:06:49.288306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.732 [2024-10-13 20:06:49.288600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.732 [2024-10-13 20:06:49.288633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.732 [2024-10-13 20:06:49.288656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.732 [2024-10-13 20:06:49.292717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.732 [2024-10-13 20:06:49.302070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.732 [2024-10-13 20:06:49.302524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.732 [2024-10-13 20:06:49.302566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.732 [2024-10-13 20:06:49.302593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.732 [2024-10-13 20:06:49.302879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.732 [2024-10-13 20:06:49.303165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.732 [2024-10-13 20:06:49.303206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.732 [2024-10-13 20:06:49.303229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.732 [2024-10-13 20:06:49.307314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.732 [2024-10-13 20:06:49.316486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.732 [2024-10-13 20:06:49.316915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.732 [2024-10-13 20:06:49.316957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.732 [2024-10-13 20:06:49.316984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.732 [2024-10-13 20:06:49.317266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.732 [2024-10-13 20:06:49.317561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.732 [2024-10-13 20:06:49.317595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.732 [2024-10-13 20:06:49.317619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.732 [2024-10-13 20:06:49.321713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.732 [2024-10-13 20:06:49.331073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.732 [2024-10-13 20:06:49.331522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.732 [2024-10-13 20:06:49.331564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.732 [2024-10-13 20:06:49.331591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.732 [2024-10-13 20:06:49.331875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.732 [2024-10-13 20:06:49.332158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.732 [2024-10-13 20:06:49.332197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.732 [2024-10-13 20:06:49.332220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.732 [2024-10-13 20:06:49.336307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.732 [2024-10-13 20:06:49.345636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.732 [2024-10-13 20:06:49.346090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.732 [2024-10-13 20:06:49.346131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.732 [2024-10-13 20:06:49.346159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.732 [2024-10-13 20:06:49.346452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.732 [2024-10-13 20:06:49.346748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.732 [2024-10-13 20:06:49.346783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.732 [2024-10-13 20:06:49.346806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.732 [2024-10-13 20:06:49.350862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.732 [2024-10-13 20:06:49.360217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.732 [2024-10-13 20:06:49.360662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.732 [2024-10-13 20:06:49.360705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.732 [2024-10-13 20:06:49.360732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.732 [2024-10-13 20:06:49.361016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.732 [2024-10-13 20:06:49.361298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.732 [2024-10-13 20:06:49.361331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.732 [2024-10-13 20:06:49.361354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.732 [2024-10-13 20:06:49.365448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.732 [2024-10-13 20:06:49.374777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.375239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.375280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.375308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.375603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.375887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.375919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.375942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.379993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.733 [2024-10-13 20:06:49.389299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.389756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.389797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.389825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.390108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.390390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.390435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.390458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.394525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.733 [2024-10-13 20:06:49.403848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.404282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.404324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.404350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.404659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.404942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.404974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.404996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.409135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.733 [2024-10-13 20:06:49.418227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.418688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.418733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.418761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.419044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.419326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.419359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.419382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.423469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.733 [2024-10-13 20:06:49.432812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.433246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.433288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.433322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.433618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.433900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.433933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.433957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.438017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.733 [2024-10-13 20:06:49.447389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.447874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.447916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.447944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.448227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.448529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.448562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.448585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.452685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.733 [2024-10-13 20:06:49.461828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.462287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.462328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.462355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.462652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.462946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.462978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.463002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.467079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.733 [2024-10-13 20:06:49.476215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.476653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.476695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.476721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.477006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.477291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.477328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.477352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.481478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.733 [2024-10-13 20:06:49.490632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.491194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.491257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.491285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.491583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.491869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.491902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.491924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.496039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.733 2762.60 IOPS, 10.79 MiB/s [2024-10-13T18:06:49.548Z] [2024-10-13 20:06:49.505152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.505581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.505624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.505652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.505938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.506225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.733 [2024-10-13 20:06:49.506258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.733 [2024-10-13 20:06:49.506282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.733 [2024-10-13 20:06:49.510409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
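Interleaved with the reset errors, bdevperf prints a periodic throughput sample (2762.60 IOPS, 10.79 MiB/s). The ratio between the two figures is consistent with 4 KiB I/O; the I/O size is an assumption inferred from the numbers, not something stated in this excerpt:

# Sanity-check the bdevperf sample: IOPS * io_size should match the MiB/s figure.
# io_size = 4096 bytes is assumed; it happens to reproduce the printed ratio.
iops = 2762.60
io_size = 4096                       # bytes (assumed)
mib_per_s = iops * io_size / 2**20
print(f"{mib_per_s:.2f} MiB/s")      # -> 10.79 MiB/s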
00:36:59.733 [2024-10-13 20:06:49.519603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.733 [2024-10-13 20:06:49.520043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.733 [2024-10-13 20:06:49.520085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.733 [2024-10-13 20:06:49.520112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.733 [2024-10-13 20:06:49.520413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.733 [2024-10-13 20:06:49.520704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.734 [2024-10-13 20:06:49.520736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.734 [2024-10-13 20:06:49.520758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.734 [2024-10-13 20:06:49.524905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.734 [2024-10-13 20:06:49.534064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.734 [2024-10-13 20:06:49.534486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.734 [2024-10-13 20:06:49.534529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.734 [2024-10-13 20:06:49.534557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.734 [2024-10-13 20:06:49.534842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.734 [2024-10-13 20:06:49.535126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.734 [2024-10-13 20:06:49.535157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.734 [2024-10-13 20:06:49.535180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.734 [2024-10-13 20:06:49.539268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.993 [2024-10-13 20:06:49.548702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.993 [2024-10-13 20:06:49.549167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.993 [2024-10-13 20:06:49.549229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.993 [2024-10-13 20:06:49.549257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.993 [2024-10-13 20:06:49.549556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.993 [2024-10-13 20:06:49.549842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.993 [2024-10-13 20:06:49.549873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.993 [2024-10-13 20:06:49.549896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.993 [2024-10-13 20:06:49.554021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.993 [2024-10-13 20:06:49.563282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.993 [2024-10-13 20:06:49.563709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.993 [2024-10-13 20:06:49.563753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.993 [2024-10-13 20:06:49.563780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.993 [2024-10-13 20:06:49.564064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.993 [2024-10-13 20:06:49.564349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.564381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.564425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.568566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.994 [2024-10-13 20:06:49.577805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.578271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.578313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.578346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.578643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.578929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.578960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.578983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.583107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.994 [2024-10-13 20:06:49.592266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.592746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.592788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.592816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.593099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.593383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.593430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.593454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.597555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.994 [2024-10-13 20:06:49.606708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.607157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.607199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.607226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.607523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.607808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.607853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.607875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.611970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.994 [2024-10-13 20:06:49.621098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.621566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.621608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.621635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.621918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.622209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.622241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.622264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.626347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.994 [2024-10-13 20:06:49.635476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.635909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.635951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.635979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.636261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.636568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.636602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.636625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.640705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.994 [2024-10-13 20:06:49.650041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.650487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.650529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.650556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.650840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.651123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.651155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.651177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.655265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.994 [2024-10-13 20:06:49.664434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.664915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.664957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.664984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.665268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.665586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.665620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.665643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.669729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.994 [2024-10-13 20:06:49.678799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.679264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.679307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.679334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.679640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.679925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.679956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.994 [2024-10-13 20:06:49.679979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.994 [2024-10-13 20:06:49.684055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3155496 Killed "${NVMF_APP[@]}" "$@" 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3156706 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3156706 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3156706 ']' 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:59.994 20:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:59.994 [2024-10-13 20:06:49.693403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.994 [2024-10-13 20:06:49.693828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.994 [2024-10-13 20:06:49.693870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.994 [2024-10-13 20:06:49.693897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.994 [2024-10-13 20:06:49.694179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.994 [2024-10-13 20:06:49.694488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.994 [2024-10-13 20:06:49.694523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.694552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.698628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
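At this point bdevperf.sh has killed the previous target process ("Killed ${NVMF_APP[@]}"), and tgt_init restarts nvmf_tgt and then waits for it to come up and listen on the RPC socket. A rough sketch of that wait, assuming the default Unix-domain RPC socket path /var/tmp/spdk.sock; this is an illustrative stand-in, not the autotest's own waitforlisten helper:

# Illustrative stand-in for waitforlisten: poll until the SPDK RPC
# Unix-domain socket accepts a connection, or give up after a timeout.
import socket
import time

def wait_for_rpc(sock_path: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # target is up and accepting RPCs
        except OSError:
            time.sleep(0.5)      # socket missing or refusing; retry
        finally:
            s.close()
    return False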
00:36:59.995 [2024-10-13 20:06:49.707986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.708442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.708485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.708512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.708795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.709079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.709111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.709134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.713224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.995 [2024-10-13 20:06:49.722494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.722946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.722998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.723025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.723313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.723612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.723645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.723668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.727871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.995 [2024-10-13 20:06:49.737208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.737788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.737848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.737877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.738172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.738475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.738508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.738534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.742715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.995 [2024-10-13 20:06:49.752080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.752567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.752632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.752660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.752950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.753241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.753274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.753297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.757501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.995 [2024-10-13 20:06:49.766728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.767217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.767271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.767298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.767598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.767890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.767922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.767946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.772133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.995 [2024-10-13 20:06:49.781188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.781712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.781764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.781791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.782080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.782371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.782414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.782440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.786625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:59.995 [2024-10-13 20:06:49.790884] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:36:59.995 [2024-10-13 20:06:49.791021] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:59.995 [2024-10-13 20:06:49.795785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.995 [2024-10-13 20:06:49.796259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.995 [2024-10-13 20:06:49.796309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:59.995 [2024-10-13 20:06:49.796338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:59.995 [2024-10-13 20:06:49.796637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:59.995 [2024-10-13 20:06:49.796933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.995 [2024-10-13 20:06:49.796965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.995 [2024-10-13 20:06:49.796989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.995 [2024-10-13 20:06:49.801188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.255 [2024-10-13 20:06:49.810430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.810899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.810947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.810974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.811261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.811563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.811596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.811629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.815912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.255 [2024-10-13 20:06:49.825046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.825545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.825598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.825625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.825916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.826208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.826239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.826263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.830469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.255 [2024-10-13 20:06:49.839587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.840037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.840090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.840118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.840419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.840719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.840752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.840775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.844948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.255 [2024-10-13 20:06:49.854290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.854749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.854792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.854819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.855105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.855405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.855437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.855460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.859676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.255 [2024-10-13 20:06:49.869007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.869500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.869552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.869580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.869869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.870157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.870189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.870212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.874372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.255 [2024-10-13 20:06:49.883699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.884163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.884214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.884241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.884541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.884830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.884862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.884892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.889069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.255 [2024-10-13 20:06:49.898374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.898866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.898918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.898945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.899231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.899544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.899589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.899612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.255 [2024-10-13 20:06:49.903756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.255 [2024-10-13 20:06:49.913006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.255 [2024-10-13 20:06:49.913498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.255 [2024-10-13 20:06:49.913551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.255 [2024-10-13 20:06:49.913578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.255 [2024-10-13 20:06:49.913864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.255 [2024-10-13 20:06:49.914150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.255 [2024-10-13 20:06:49.914183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.255 [2024-10-13 20:06:49.914207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:49.918350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.256 [2024-10-13 20:06:49.927600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:49.928064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:49.928116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:49.928142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:49.928446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:49.928734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:49.928767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:49.928790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:49.932942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.256 [2024-10-13 20:06:49.942214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:49.942707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:49.942769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:49.942798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:49.943085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:49.943374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:49.943431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:49.943459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:49.947635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.256 [2024-10-13 20:06:49.956205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:00.256 [2024-10-13 20:06:49.956734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:49.957197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:49.957240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:49.957268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:49.957565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:49.957852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:49.957885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:49.957908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:49.962154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.256 [2024-10-13 20:06:49.971217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:49.971749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:49.971795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:49.971826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:49.972117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:49.972422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:49.972456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:49.972481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:49.976680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.256 [2024-10-13 20:06:49.985809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:49.986449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:49.986502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:49.986536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:49.986842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:49.987142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:49.987176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:49.987204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:49.991366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.256 [2024-10-13 20:06:50.000505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:50.000971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:50.001014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:50.001042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:50.001441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:50.001820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:50.001856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:50.001882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:50.006141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.256 [2024-10-13 20:06:50.015129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:50.015597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:50.015642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:50.015672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:50.015964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:50.016258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:50.016292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:50.016316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:50.020571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.256 [2024-10-13 20:06:50.031542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:50.032205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:50.032266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:50.032311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:50.032721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:50.033120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:50.033167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:50.033214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:50.038993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.256 [2024-10-13 20:06:50.047873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:50.048474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:50.048531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:50.048575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:50.048988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:50.049405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:50.049447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:50.049486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.256 [2024-10-13 20:06:50.055312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.256 [2024-10-13 20:06:50.064478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.256 [2024-10-13 20:06:50.065083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.256 [2024-10-13 20:06:50.065153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.256 [2024-10-13 20:06:50.065208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.256 [2024-10-13 20:06:50.065634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.256 [2024-10-13 20:06:50.066059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.256 [2024-10-13 20:06:50.066112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.256 [2024-10-13 20:06:50.066161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.072225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.518 [2024-10-13 20:06:50.081497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.082140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.082203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.082249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.082705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.083134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.083182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.083225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.089587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.518 [2024-10-13 20:06:50.096950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.097412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.097459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.097485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.097778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.098060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.098090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.098111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.102279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.518 [2024-10-13 20:06:50.105593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.518 [2024-10-13 20:06:50.105637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.518 [2024-10-13 20:06:50.105668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.518 [2024-10-13 20:06:50.105689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.518 [2024-10-13 20:06:50.105731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
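(Note on the app_setup_trace notices above: they describe how the trace snapshot referenced in this log could be collected on the test node. A minimal sketch, assuming the nvmf application is still running with shm id 0 as the notices state, and with the output file names chosen here purely for illustration:)
    # capture a snapshot of the nvmf tracepoint events mentioned in the notices
    spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0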
00:37:00.518 [2024-10-13 20:06:50.108304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:00.518 [2024-10-13 20:06:50.108417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.518 [2024-10-13 20:06:50.108440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:00.518 [2024-10-13 20:06:50.111280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.111812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.111855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.111882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.112180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.112485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.112516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.112551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.116521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.518 [2024-10-13 20:06:50.125518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.126186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.126238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.126269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.126575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.126848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.126883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.126908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.130682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.518 [2024-10-13 20:06:50.139841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.140252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.140291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.140315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.140604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.140875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.140904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.140924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.144672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.518 [2024-10-13 20:06:50.154124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.154592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.154633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.154659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.154948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.155196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.155223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.155243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.159229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.518 [2024-10-13 20:06:50.168288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.518 [2024-10-13 20:06:50.168697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.518 [2024-10-13 20:06:50.168736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.518 [2024-10-13 20:06:50.168761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.518 [2024-10-13 20:06:50.169056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.518 [2024-10-13 20:06:50.169305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.518 [2024-10-13 20:06:50.169334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.518 [2024-10-13 20:06:50.169354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.518 [2024-10-13 20:06:50.173168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.519 [2024-10-13 20:06:50.182492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.183080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.183124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.183151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.183471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.183764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.183793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.183815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.187626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.519 [2024-10-13 20:06:50.196745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.197431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.197490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.197524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.197811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.198069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.198099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.198126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.201947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.519 [2024-10-13 20:06:50.211157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.211918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.211976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.212009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.212318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.212613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.212645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.212672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.216447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.519 [2024-10-13 20:06:50.225514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.226014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.226058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.226091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.226392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.226677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.226707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.226742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.230553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.519 [2024-10-13 20:06:50.239687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.240135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.240176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.240216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.240526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.240802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.240832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.240852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.244645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.519 [2024-10-13 20:06:50.253907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.254405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.254445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.254470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.254759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.255002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.255031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.255051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.258780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.519 [2024-10-13 20:06:50.268113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.268525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.268566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.268592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.268883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.269132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.269169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.269190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.273154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.519 [2024-10-13 20:06:50.282483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.283024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.283065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.283090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.283375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.283652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.283682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.283719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.287620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.519 [2024-10-13 20:06:50.296610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.297043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.297083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.297109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.297381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.297640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.297669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.297705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.301348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.519 [2024-10-13 20:06:50.310624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.311083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.311122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.311147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.311458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.519 [2024-10-13 20:06:50.311715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.519 [2024-10-13 20:06:50.311746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.519 [2024-10-13 20:06:50.311781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.519 [2024-10-13 20:06:50.315482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.519 [2024-10-13 20:06:50.324666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.519 [2024-10-13 20:06:50.325128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.519 [2024-10-13 20:06:50.325168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.519 [2024-10-13 20:06:50.325194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.519 [2024-10-13 20:06:50.325493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.520 [2024-10-13 20:06:50.325785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.520 [2024-10-13 20:06:50.325817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.520 [2024-10-13 20:06:50.325837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.520 [2024-10-13 20:06:50.329760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.780 [2024-10-13 20:06:50.338954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.339612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.339666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.339698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.340002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.340266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.340296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.340322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.344237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.780 [2024-10-13 20:06:50.353187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.353968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.354025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.354058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.354364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.354652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.354683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.354726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.358472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.780 [2024-10-13 20:06:50.367427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.367914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.367954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.367986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.368276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.368577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.368609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.368631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.372441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.780 [2024-10-13 20:06:50.381555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.382003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.382043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.382069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.382357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.382637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.382667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.382703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.386525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.780 [2024-10-13 20:06:50.395780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.396251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.396291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.396316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.396611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.396876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.396906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.396927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.400815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.780 [2024-10-13 20:06:50.409945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.410359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.410404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.410431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.410719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.410962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.410997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.411019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.414723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.780 [2024-10-13 20:06:50.424142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.424536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.424576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.424602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.424871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.425111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.425139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.425175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.428892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.780 [2024-10-13 20:06:50.438140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.438579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.438620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.438647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.438932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.439191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.439221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.439241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.443026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.780 [2024-10-13 20:06:50.452452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.453024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.780 [2024-10-13 20:06:50.453066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.780 [2024-10-13 20:06:50.453095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.780 [2024-10-13 20:06:50.453416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.780 [2024-10-13 20:06:50.453698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.780 [2024-10-13 20:06:50.453729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.780 [2024-10-13 20:06:50.453750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.780 [2024-10-13 20:06:50.457643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.780 [2024-10-13 20:06:50.466877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.780 [2024-10-13 20:06:50.467322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.467361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.467386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.467660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.467922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.467951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.467970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.471820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.781 [2024-10-13 20:06:50.480976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.481376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.481423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.481448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.481737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.481982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.482009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.482029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.485784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.781 [2024-10-13 20:06:50.495132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.495556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.495596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.495622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.495901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.496153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.496196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.496215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.499978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.781 2302.17 IOPS, 8.99 MiB/s [2024-10-13T18:06:50.596Z] [2024-10-13 20:06:50.509257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.509702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.509741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.509777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.510058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.510318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.510345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.510364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.514131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.781 [2024-10-13 20:06:50.523345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.523771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.523811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.523836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.524122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.524364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.524418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.524440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.528458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.781 [2024-10-13 20:06:50.537413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.537833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.537871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.537896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.538180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.538469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.538500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.538521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.542238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.781 [2024-10-13 20:06:50.551573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.551975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.552012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.552037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.552323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.552642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.552673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.552694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.556391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.781 [2024-10-13 20:06:50.565633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.566061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.566100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.566125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.566418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.566684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.566712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.566746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.570416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:00.781 [2024-10-13 20:06:50.579792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.580229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.580268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.580293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:00.781 [2024-10-13 20:06:50.580561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:00.781 [2024-10-13 20:06:50.580844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.781 [2024-10-13 20:06:50.580872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.781 [2024-10-13 20:06:50.580891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.781 [2024-10-13 20:06:50.584624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.781 [2024-10-13 20:06:50.594024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.781 [2024-10-13 20:06:50.594453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.781 [2024-10-13 20:06:50.594515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:00.781 [2024-10-13 20:06:50.594540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.041 [2024-10-13 20:06:50.594808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.041 [2024-10-13 20:06:50.595073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.041 [2024-10-13 20:06:50.595103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.041 [2024-10-13 20:06:50.595124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.041 [2024-10-13 20:06:50.598929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.041 [2024-10-13 20:06:50.608125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.041 [2024-10-13 20:06:50.608512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-10-13 20:06:50.608551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.041 [2024-10-13 20:06:50.608577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.041 [2024-10-13 20:06:50.608859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.041 [2024-10-13 20:06:50.609119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.041 [2024-10-13 20:06:50.609147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.041 [2024-10-13 20:06:50.609168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.041 [2024-10-13 20:06:50.613039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.041 [2024-10-13 20:06:50.622102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.041 [2024-10-13 20:06:50.622539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-10-13 20:06:50.622578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.041 [2024-10-13 20:06:50.622603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.622886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.623156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.623185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.623205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.626846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.042 [2024-10-13 20:06:50.636109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.636568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.636607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.636631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.636915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.637187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.637217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.637251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.640930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.042 [2024-10-13 20:06:50.650203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.650619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.650663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.650689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.650983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.651223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.651251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.651270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.654979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.042 [2024-10-13 20:06:50.664216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.664657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.664698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.664722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.665006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.665245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.665273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.665292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.668943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.042 [2024-10-13 20:06:50.678191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.678604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.678642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.678666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.678950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.679190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.679217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.679237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.682928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.042 [2024-10-13 20:06:50.692087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.692485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.692524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.692549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.692815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.693059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.693086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.693106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.696796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.042 [2024-10-13 20:06:50.706007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.706429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.706468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.706492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.706760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.706998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.707026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.707045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.710611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.042 [2024-10-13 20:06:50.719983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.720411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.720450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.720475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.720745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.720999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.721027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.721046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.724608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.042 [2024-10-13 20:06:50.734029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.734404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.734442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.734466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.734750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.735011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.735041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.735061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.738626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.042 [2024-10-13 20:06:50.748010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.748473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.748511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.748535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.748820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.749058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.749085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.749105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.752785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.042 [2024-10-13 20:06:50.762170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.042 [2024-10-13 20:06:50.762598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.042 [2024-10-13 20:06:50.762638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.042 [2024-10-13 20:06:50.762663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.042 [2024-10-13 20:06:50.762933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.042 [2024-10-13 20:06:50.763184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.042 [2024-10-13 20:06:50.763212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.042 [2024-10-13 20:06:50.763232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.042 [2024-10-13 20:06:50.767235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.042 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:01.042 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.043 [2024-10-13 20:06:50.776323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.043 [2024-10-13 20:06:50.776837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.043 [2024-10-13 20:06:50.776878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.043 [2024-10-13 20:06:50.776902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.043 [2024-10-13 20:06:50.777187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.043 [2024-10-13 20:06:50.777478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.043 [2024-10-13 20:06:50.777509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.043 [2024-10-13 20:06:50.777537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.043 [2024-10-13 20:06:50.781349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
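Each repeated block above is one reconnect attempt by the host-side bdev_nvme module: connect() to 10.0.0.2 port 4420 returns errno 111 (ECONNREFUSED) because the target's TCP listener has not been created yet (it is only added further down in this trace), so the qpair flush fails on a bad file descriptor, the controller is marked failed, and another reset is scheduled. A minimal sketch of gating the initiator side on the listener being reachable; this is a hypothetical helper, not part of the SPDK test scripts, and it assumes bash's /dev/tcp pseudo-device and coreutils timeout are available:

    #!/usr/bin/env bash
    # Hypothetical helper: poll until the NVMe/TCP listener accepts connections, so the
    # initiator is not started while connect() still returns ECONNREFUSED (errno 111).
    TARGET_IP=${1:-10.0.0.2}
    TARGET_PORT=${2:-4420}
    for _ in $(seq 1 50); do
        # the redirect through /dev/tcp fails until something is listening on the port
        if timeout 1 bash -c "exec 3<>/dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
            echo "listener is up on ${TARGET_IP}:${TARGET_PORT}"
            exit 0
        fi
        sleep 0.2
    done
    echo "listener on ${TARGET_IP}:${TARGET_PORT} never came up" >&2
    exit 1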
00:37:01.043 [2024-10-13 20:06:50.790461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.043 [2024-10-13 20:06:50.790937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.043 [2024-10-13 20:06:50.790976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.043 [2024-10-13 20:06:50.791000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.043 [2024-10-13 20:06:50.791290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.043 [2024-10-13 20:06:50.791575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.043 [2024-10-13 20:06:50.791605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.043 [2024-10-13 20:06:50.791627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.043 [2024-10-13 20:06:50.794705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.043 [2024-10-13 20:06:50.795404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
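In between the reconnect errors the harness starts configuring the target: timing_exit start_nvmf_tgt marks the target app as started, a cleanup trap (process_shm / nvmftestfini) is installed, and rpc_cmd nvmf_create_transport -t tcp -o -u 8192 initializes the TCP transport, which is what produces the '*** TCP Transport Init ***' notice. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py; a standalone sketch of the same call, assuming the default RPC socket at /var/tmp/spdk.sock:

    # arguments copied verbatim from the trace above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_get_transports          # confirm the TCP transport is now registered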
00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.043 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.043 [2024-10-13 20:06:50.804630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.043 [2024-10-13 20:06:50.805071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.043 [2024-10-13 20:06:50.805116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.043 [2024-10-13 20:06:50.805140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.043 [2024-10-13 20:06:50.805462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.043 [2024-10-13 20:06:50.805743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.043 [2024-10-13 20:06:50.805771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.043 [2024-10-13 20:06:50.805791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.043 [2024-10-13 20:06:50.809619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.043 [2024-10-13 20:06:50.818512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.043 [2024-10-13 20:06:50.818978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.043 [2024-10-13 20:06:50.819025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.043 [2024-10-13 20:06:50.819049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.043 [2024-10-13 20:06:50.819355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.043 [2024-10-13 20:06:50.819656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.043 [2024-10-13 20:06:50.819712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.043 [2024-10-13 20:06:50.819733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.043 [2024-10-13 20:06:50.823553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
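The next RPC, bdev_malloc_create 64 512 -b Malloc0, creates the backing device for the run: a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0. Standalone sketch under the same RPC-socket assumption:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB total, 512-byte blocks
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0               # verify the bdev was registered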
00:37:01.043 [2024-10-13 20:06:50.832611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.043 [2024-10-13 20:06:50.833327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.043 [2024-10-13 20:06:50.833386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.043 [2024-10-13 20:06:50.833424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.043 [2024-10-13 20:06:50.833718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.043 [2024-10-13 20:06:50.833994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.043 [2024-10-13 20:06:50.834023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.043 [2024-10-13 20:06:50.834047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.043 [2024-10-13 20:06:50.837839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.043 [2024-10-13 20:06:50.846766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.043 [2024-10-13 20:06:50.847250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.043 [2024-10-13 20:06:50.847298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.043 [2024-10-13 20:06:50.847324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.043 [2024-10-13 20:06:50.847599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.043 [2024-10-13 20:06:50.847885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.043 [2024-10-13 20:06:50.847912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.043 [2024-10-13 20:06:50.847931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.043 [2024-10-13 20:06:50.851679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.302 [2024-10-13 20:06:50.861030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.302 [2024-10-13 20:06:50.861448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.302 [2024-10-13 20:06:50.861494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.302 [2024-10-13 20:06:50.861519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.302 [2024-10-13 20:06:50.861821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.302 [2024-10-13 20:06:50.862124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.302 [2024-10-13 20:06:50.862158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.302 [2024-10-13 20:06:50.862179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.302 [2024-10-13 20:06:50.865936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.302 [2024-10-13 20:06:50.874991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.302 [2024-10-13 20:06:50.875469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.302 [2024-10-13 20:06:50.875516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.302 [2024-10-13 20:06:50.875541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.302 [2024-10-13 20:06:50.875825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.302 [2024-10-13 20:06:50.876068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.302 [2024-10-13 20:06:50.876095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.302 [2024-10-13 20:06:50.876115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.302 [2024-10-13 20:06:50.879779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.302 Malloc0 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.302 [2024-10-13 20:06:50.889083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.302 [2024-10-13 20:06:50.889528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.302 [2024-10-13 20:06:50.889576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.302 [2024-10-13 20:06:50.889601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.302 [2024-10-13 20:06:50.889873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.302 [2024-10-13 20:06:50.890122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.302 [2024-10-13 20:06:50.890150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.302 [2024-10-13 20:06:50.890170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.302 [2024-10-13 20:06:50.893944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
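Here the NVMe-oF subsystem itself is created: nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the serial number reported to initiators. Standalone sketch:

    # -a: allow any host to connect; -s: serial number reported to the initiator
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_get_subsystems          # cnode1 should now be listed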
00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.302 [2024-10-13 20:06:50.903244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.302 [2024-10-13 20:06:50.903695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.302 [2024-10-13 20:06:50.903744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:01.302 [2024-10-13 20:06:50.903773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:01.302 [2024-10-13 20:06:50.904044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.302 [2024-10-13 20:06:50.904317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.302 [2024-10-13 20:06:50.904345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.302 [2024-10-13 20:06:50.904366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.302 [2024-10-13 20:06:50.908192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.302 [2024-10-13 20:06:50.908386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.302 20:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3155926 00:37:01.302 [2024-10-13 20:06:50.917272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.302 [2024-10-13 20:06:50.949043] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
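The last two RPCs attach the namespace and open the listener: nvmf_subsystem_add_ns exposes Malloc0 as a namespace of cnode1 (the first namespace typically gets NSID 1), and nvmf_subsystem_add_listener brings up the NVMe/TCP listener on 10.0.0.2 port 4420, producing the '*** NVMe/TCP Target Listening ***' notice. As soon as the listener exists, the host's pending reset succeeds ('Resetting controller successful'), the harness waits on the background bdevperf process (wait 3155926), and the IOPS figures start climbing in the lines that follow. Standalone sketch of the same two calls:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # attach the malloc bdev as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                               # listen on 10.0.0.2:4420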
00:37:02.808 2461.71 IOPS, 9.62 MiB/s [2024-10-13T18:06:53.559Z] 2957.38 IOPS, 11.55 MiB/s [2024-10-13T18:06:54.938Z] 3345.33 IOPS, 13.07 MiB/s [2024-10-13T18:06:55.873Z] 3635.40 IOPS, 14.20 MiB/s [2024-10-13T18:06:56.808Z] 3886.55 IOPS, 15.18 MiB/s [2024-10-13T18:06:57.746Z] 4090.50 IOPS, 15.98 MiB/s [2024-10-13T18:06:58.679Z] 4270.85 IOPS, 16.68 MiB/s [2024-10-13T18:06:59.618Z] 4415.64 IOPS, 17.25 MiB/s [2024-10-13T18:06:59.618Z] 4534.13 IOPS, 17.71 MiB/s 00:37:09.803 Latency(us) 00:37:09.803 [2024-10-13T18:06:59.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.803 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:09.803 Verification LBA range: start 0x0 length 0x4000 00:37:09.803 Nvme1n1 : 15.02 4537.51 17.72 9128.09 0.00 9337.96 1134.74 40583.77 00:37:09.803 [2024-10-13T18:06:59.618Z] =================================================================================================================== 00:37:09.803 [2024-10-13T18:06:59.618Z] Total : 4537.51 17.72 9128.09 0.00 9337.96 1134.74 40583.77 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:10.739 rmmod nvme_tcp 00:37:10.739 rmmod nvme_fabrics 00:37:10.739 rmmod nvme_keyring 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3156706 ']' 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3156706 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3156706 ']' 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3156706 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3156706 
00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3156706' 00:37:10.739 killing process with pid 3156706 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3156706 00:37:10.739 20:07:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3156706 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.117 20:07:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:14.020 00:37:14.020 real 0m26.306s 00:37:14.020 user 1m12.570s 00:37:14.020 sys 0m4.575s 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.020 ************************************ 00:37:14.020 END TEST nvmf_bdevperf 00:37:14.020 ************************************ 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.020 ************************************ 00:37:14.020 START TEST nvmf_target_disconnect 00:37:14.020 ************************************ 00:37:14.020 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:14.280 * Looking for test storage... 
00:37:14.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:14.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.280 --rc genhtml_branch_coverage=1 00:37:14.280 --rc genhtml_function_coverage=1 00:37:14.280 --rc genhtml_legend=1 00:37:14.280 --rc geninfo_all_blocks=1 00:37:14.280 --rc geninfo_unexecuted_blocks=1 00:37:14.280 00:37:14.280 ' 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:14.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.280 --rc genhtml_branch_coverage=1 00:37:14.280 --rc genhtml_function_coverage=1 00:37:14.280 --rc genhtml_legend=1 00:37:14.280 --rc geninfo_all_blocks=1 00:37:14.280 --rc geninfo_unexecuted_blocks=1 00:37:14.280 00:37:14.280 ' 00:37:14.280 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:14.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.280 --rc genhtml_branch_coverage=1 00:37:14.280 --rc genhtml_function_coverage=1 00:37:14.280 --rc genhtml_legend=1 00:37:14.281 --rc geninfo_all_blocks=1 00:37:14.281 --rc geninfo_unexecuted_blocks=1 00:37:14.281 00:37:14.281 ' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:14.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.281 --rc genhtml_branch_coverage=1 00:37:14.281 --rc genhtml_function_coverage=1 00:37:14.281 --rc genhtml_legend=1 00:37:14.281 --rc geninfo_all_blocks=1 00:37:14.281 --rc geninfo_unexecuted_blocks=1 00:37:14.281 00:37:14.281 ' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:14.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:14.281 20:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:16.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:16.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.234 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:16.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:16.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
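The block from nvmf/common.sh@366 onward is the NIC-discovery loop: each PCI function whose device ID matched the supported Intel/Mellanox list (here the two Intel 0x159b ports bound to the ice driver) is mapped to its kernel net device by globbing sysfs, and only links the script sees as up are kept. A minimal sketch of that lookup, assuming the standard sysfs layout and a hand-picked PCI address instead of the script's pci_bus_cache helpers:

    # For a given PCI function, list the net devices that sit on it (cf. nvmf/common.sh@409).
    pci=0000:0a:00.0
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        dev=${netdir##*/}                    # e.g. cvl_0_0
        state=$(cat "$netdir/operstate")     # assumption: the trace's "up == up" check maps to link state
        echo "Found net device under $pci: $dev ($state)"
    done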
00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:16.235 20:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:16.235 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:16.235 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:16.235 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:16.235 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:16.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:16.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:37:16.495 00:37:16.495 --- 10.0.0.2 ping statistics --- 00:37:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.495 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:16.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:16.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:37:16.495 00:37:16.495 --- 10.0.0.1 ping statistics --- 00:37:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.495 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:16.495 ************************************ 00:37:16.495 START TEST nvmf_target_disconnect_tc1 00:37:16.495 ************************************ 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:16.495 20:07:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:16.495 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:16.754 [2024-10-13 20:07:06.351155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.754 [2024-10-13 20:07:06.351274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:16.754 [2024-10-13 20:07:06.351371] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:16.754 [2024-10-13 20:07:06.351429] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:16.754 [2024-10-13 20:07:06.351461] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:16.754 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:16.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:16.754 Initializing NVMe Controllers 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:16.754 00:37:16.754 real 0m0.247s 00:37:16.754 user 0m0.096s 00:37:16.754 sys 0m0.150s 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:16.754 ************************************ 00:37:16.754 END TEST nvmf_target_disconnect_tc1 00:37:16.754 ************************************ 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:16.754 ************************************ 00:37:16.754 START TEST nvmf_target_disconnect_tc2 00:37:16.754 ************************************ 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3160013 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3160013 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3160013 ']' 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.754 20:07:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.754 [2024-10-13 20:07:06.547627] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:37:16.754 [2024-10-13 20:07:06.547784] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.014 [2024-10-13 20:07:06.687942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:17.014 [2024-10-13 20:07:06.811553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.014 [2024-10-13 20:07:06.811632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:17.014 [2024-10-13 20:07:06.811654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.014 [2024-10-13 20:07:06.811675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.014 [2024-10-13 20:07:06.811692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:17.014 [2024-10-13 20:07:06.814261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:17.014 [2024-10-13 20:07:06.814326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:17.014 [2024-10-13 20:07:06.814371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:17.014 [2024-10-13 20:07:06.814391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.950 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 Malloc0 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 [2024-10-13 20:07:07.623201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 20:07:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 [2024-10-13 20:07:07.653586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3160174 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:17.951 20:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:19.853 20:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3160013 00:37:19.854 20:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error 
(sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Write completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.130 Read completed with error (sct=0, sc=8) 00:37:20.130 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 [2024-10-13 20:07:09.692798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, 
sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 [2024-10-13 20:07:09.693438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 
00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 [2024-10-13 20:07:09.694027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 
00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Write completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 Read completed with error (sct=0, sc=8) 00:37:20.131 starting I/O failed 00:37:20.131 [2024-10-13 20:07:09.694578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:20.131 [2024-10-13 20:07:09.694830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.131 [2024-10-13 20:07:09.694885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.131 qpair failed and we were unable to recover it. 00:37:20.131 [2024-10-13 20:07:09.695068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.131 [2024-10-13 20:07:09.695103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.131 qpair failed and we were unable to recover it. 00:37:20.131 [2024-10-13 20:07:09.695406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.695480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.695598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.695632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.695861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.695899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.696080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.696145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.696344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.696388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.696525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.696561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
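Before the kill -9 above, the tc2 setup (host/target_disconnect.sh@17-@26 in the trace) brought the target up inside the cvl_0_0_ns_spdk namespace and configured it through rpc_cmd, the test helper that drives scripts/rpc.py against the app's UNIX-domain RPC socket. A sketch of the equivalent bring-up with plain rpc.py calls, mirroring the arguments visible in the trace (the relative paths and the default /var/tmp/spdk.sock socket are assumptions):

    # Target app, as launched by the harness:
    #   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    ./scripts/rpc.py nvmf_create_transport -t tcp -o             # mirrors NVMF_TRANSPORT_OPTS='-t tcp -o'
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The reconnect example (reconnectpid=3160174) is then pointed at 10.0.0.2:4420, and after a short sleep the harness kill -9s the target (nvmfpid=3160013), which is what produces the failed I/O completions and the CQ transport error -6 messages above.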
00:37:20.132 [2024-10-13 20:07:09.696692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.696742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.696964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.697016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.697204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.697237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.697373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.697427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.697542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.697577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.697714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.697779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.697956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.697993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.698164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.698200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.698339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.698382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.698539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.698588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
00:37:20.132 [2024-10-13 20:07:09.698798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.698852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.698995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.699049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.699222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.699275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.699421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.699473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.699628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.699662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.699814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.699864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.699986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.700021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.700190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.700224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.700362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.700413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.700520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.700554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
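The repeated "connect() failed, errno = 111" records are the initiator's reconnect logic hitting a port with no listener behind it anymore: errno 111 on Linux is ECONNREFUSED, so every new TCP qpair towards 10.0.0.2:4420 is rejected immediately and the "qpair failed and we were unable to recover it" path is taken. A quick way to confirm the errno mapping on the build host (an illustrative one-liner, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused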
00:37:20.132 [2024-10-13 20:07:09.700717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.700751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.700900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.700953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.701138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.701172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.701295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.701328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.701469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.701504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.701630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.701663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.701784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.701817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.701934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.701968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.702123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.702161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.702338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.702383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
00:37:20.132 [2024-10-13 20:07:09.702498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.702532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.702646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.702684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.702850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.702884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.702993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-10-13 20:07:09.703045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-10-13 20:07:09.703203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.703256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.703441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.703476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.703590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.703625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.703812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.703854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.704051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.704088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.704200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.704237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-10-13 20:07:09.704390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.704437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.704570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.704604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.704816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.704867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.704980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.705017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.705212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.705250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.705402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.705440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.705564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.705598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.705706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.705740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.705846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.705880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.706076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.706110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-10-13 20:07:09.706304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.706342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.706494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.706529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.706674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.706708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.706925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.706958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.707065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.707098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.707245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.707283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.707436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.707471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.707605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.707638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.707807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.707858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.708037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.708075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-10-13 20:07:09.708263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.708297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.708408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.708442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.708578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.708611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.708766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.708803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.708960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.708998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.709151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.709185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.709310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.709343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.709517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.709566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.709709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.709758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.709891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.709927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-10-13 20:07:09.710115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.710174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.710281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.710316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.710456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.710490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-10-13 20:07:09.710619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-10-13 20:07:09.710653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.710763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.710798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.710955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.710989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.711113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.711147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.711281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.711326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.711491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.711526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.711684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.711718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-10-13 20:07:09.711850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.711884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.712023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.712056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.712191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.712227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.712375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.712446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.712583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.712618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.712803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.712883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.713020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.713054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.713145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.713179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.713308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.713344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.713494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.713528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-10-13 20:07:09.713639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.713694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.713833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.713868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.714040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.714090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.714235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.714273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.714462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.714511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.714665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.714728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.714876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.714927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.715053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.715087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.715230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.715265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.715384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.715426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-10-13 20:07:09.715558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.715593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.715752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.715786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.715896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.715931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.716036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.716072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.716206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.716241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.716371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.716414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.716559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.716595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.716753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.716801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.716906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.716942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.717080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.717114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-10-13 20:07:09.717246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.717279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.717450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.717485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.717608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.717644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-10-13 20:07:09.717808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-10-13 20:07:09.717843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.718005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.718038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.718141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.718175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.718319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.718353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.718491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.718531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.718683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.718717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.718806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.718839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 
00:37:20.135 [2024-10-13 20:07:09.718998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.719135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.719328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.719472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.719649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.719784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.719945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.719982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.720129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.720167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.720330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.720364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.720507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.720542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 
00:37:20.135 [2024-10-13 20:07:09.720641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.720676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.720816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.720850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.720978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.721012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.721104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.721138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.721271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.721304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.721437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.721472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.721590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.721653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.721794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.721831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.721992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.722030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.722173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.722210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 
00:37:20.135 [2024-10-13 20:07:09.722354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.722388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.722553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.722587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.722777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.722813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.722924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.722962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.723117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.723154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.723302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.723338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.723500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-10-13 20:07:09.723535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-10-13 20:07:09.723712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.723749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.723940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.723995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.724132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.724184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 
00:37:20.136 [2024-10-13 20:07:09.724332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.724371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.724572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.724621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.724783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.724824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.724945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.724984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.725128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.725166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.725359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.725422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.725544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.725583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.725749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.725793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.725944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.725982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.726093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.726130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 
00:37:20.136 [2024-10-13 20:07:09.726301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.726353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.726519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.726555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.726737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.726775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.726894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.726928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.727034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.727067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.727191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.727229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.727381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.727424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.727531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.727565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.727700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.727734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.727903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.727956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 
00:37:20.136 [2024-10-13 20:07:09.728093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.728131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.728266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.728303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.728430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.728464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.728575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.728609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.728745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.728779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.729007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.729044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.729190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.729227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.729374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.729417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.729598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.729632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.729727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.729779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 
00:37:20.136 [2024-10-13 20:07:09.729925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.729962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.730123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.730160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.730314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.730363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.730544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.730579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.730733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.730782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.730905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.730943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.731092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.731145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.136 qpair failed and we were unable to recover it. 00:37:20.136 [2024-10-13 20:07:09.731312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.136 [2024-10-13 20:07:09.731347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.731503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.731553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.731697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.731734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 
00:37:20.137 [2024-10-13 20:07:09.731833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.731867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.732034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.732068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.732228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.732265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.732386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.732467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.732604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.732637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.732812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.732850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.733020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.733054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.733187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.733226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.733347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.733380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.733532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.733567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 
00:37:20.137 [2024-10-13 20:07:09.733675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.733710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.733862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.733896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.734076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.734119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.734304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.734341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.734497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.734530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.734661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.734713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.734835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.734869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.735036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.735070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.735169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.735203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.735358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.735391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 
00:37:20.137 [2024-10-13 20:07:09.735527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.735559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.735712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.735750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.735956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.735992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.736176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.736214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.736373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.736421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.736576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.736610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.736712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.736748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.736852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.736886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.737067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.737104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.737221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.737270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 
00:37:20.137 [2024-10-13 20:07:09.737445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.737479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.737607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.737640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.737813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.737850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.737961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.737998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.738205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.738241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.738401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.738435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.738567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-10-13 20:07:09.738600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.137 qpair failed and we were unable to recover it. 00:37:20.137 [2024-10-13 20:07:09.738702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.738735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.738828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.738861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.739016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.739053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-10-13 20:07:09.739231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.739268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.739421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.739471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.739607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.739640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.739806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.739839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.739951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.739985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.740121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.740154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.740297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.740334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.740501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.740539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.740708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.740746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.740894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.740928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-10-13 20:07:09.741082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.741115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.741256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.741293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.741446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.741480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.741644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.741694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.741842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.741878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.742078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.742115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.742270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.742304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.742411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.742445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.742571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.742620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.742770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.742806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-10-13 20:07:09.742936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.742971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.743087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.743121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.743276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.743310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.743428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.743464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.743603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.743638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.743769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.743803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.743961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.743995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.744167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.744229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.744342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.744380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.744514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.744547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-10-13 20:07:09.744709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.744746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.744916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.744953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.745091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.745128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.745284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.745319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.745461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.745496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.745626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.745663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.745830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-10-13 20:07:09.745882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-10-13 20:07:09.746080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.746114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.746249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.746283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.746415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.746471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-10-13 20:07:09.746620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.746657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.746787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.746824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.747111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.747148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.747285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.747322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.747496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.747531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.747658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.747700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.747839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.747876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.748018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.748060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.748187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.748221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.748353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.748387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-10-13 20:07:09.748498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.748531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.748709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.748746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.748877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.748912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.749038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.749209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.749375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.749562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.749734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.749864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.749996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.750031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-10-13 20:07:09.750162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.750197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.750334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.750367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.750509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.750544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.750699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.750733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.750861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.750894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.751003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.751040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.751157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.751196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.751344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.751377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.751485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.751519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.751642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.751676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-10-13 20:07:09.751835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.751872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-10-13 20:07:09.752012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-10-13 20:07:09.752049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.752248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.752285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.752425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.752472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.752629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.752680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.752775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.752808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.752919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.752954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.753092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.753126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.753289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.753323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.753459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.753493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-10-13 20:07:09.753599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.753632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.753820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.753902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.754053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.754091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.754239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.754273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.754408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.754443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.754592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.754644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.754796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.754850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.755000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.755162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.755291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-10-13 20:07:09.755473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.755641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.755775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.755940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.755973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.756131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.756165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.756292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.756325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.756438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.756474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.756572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.756605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.756760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.756794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.756913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.756965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-10-13 20:07:09.757098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.757133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.757270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.757304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.757425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.757460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.757594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.757627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.757755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.757789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.757888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.757921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.758046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.758079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.758239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.758272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.758405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.758439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.758561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.758594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-10-13 20:07:09.758729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-10-13 20:07:09.758762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-10-13 20:07:09.758887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.758941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.759098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.759133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.759262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.759297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.759410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.759463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.759582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.759620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.759757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.759794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.759942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.759976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.760102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.760136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.760304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.760342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 
00:37:20.141 [2024-10-13 20:07:09.760530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.760563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.760684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.760721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.760867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.760903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.761114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.761148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.761278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.761311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.761465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.761498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.761628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.761662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.761806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.761865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.762003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.762039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-10-13 20:07:09.762238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-10-13 20:07:09.762274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 
00:37:20.141 [2024-10-13 20:07:09.762419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-10-13 20:07:09.762470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats for every connection attempt from 20:07:09.762 through 20:07:09.801, cycling over tqpair handles 0x6150001f2f00, 0x615000210000, 0x61500021ff00 and 0x6150001ffe80, all with addr=10.0.0.2, port=4420 ...]
00:37:20.146 [2024-10-13 20:07:09.801262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.146 [2024-10-13 20:07:09.801302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:20.146 qpair failed and we were unable to recover it.
00:37:20.146 [2024-10-13 20:07:09.801480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.801520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-10-13 20:07:09.801702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.801750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-10-13 20:07:09.801884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.801920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-10-13 20:07:09.802058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.802092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-10-13 20:07:09.802328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.802365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-10-13 20:07:09.802533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.802569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-10-13 20:07:09.802706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-10-13 20:07:09.802761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.802968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.803069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.803292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.803331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.803477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.803514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 
00:37:20.147 [2024-10-13 20:07:09.803670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.803704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.803835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.803869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.804077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.804112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.804267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.804302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.804477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.804530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.804686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.804724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.804866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.804900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.805027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.805061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.805276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.805338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.805509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.805543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 
00:37:20.147 [2024-10-13 20:07:09.805724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.805761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.805948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.806025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.806147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.806181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.806309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.806352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.806499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.806534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.806685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.806734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.806925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.806986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.807149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.807190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.807371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.807417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.807603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.807637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 
00:37:20.147 [2024-10-13 20:07:09.807789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.807826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.807962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.807999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.808121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.808173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.808289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.808327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.808468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.808504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.808599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.808633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.808768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.808802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.808906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.808940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.809101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.809134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.809241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.809275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 
00:37:20.147 [2024-10-13 20:07:09.809469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.809504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.809641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.809675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.809771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.809821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.810012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.810047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.810201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.810234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.810392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.810451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.147 qpair failed and we were unable to recover it. 00:37:20.147 [2024-10-13 20:07:09.810591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.147 [2024-10-13 20:07:09.810625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.810839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.810873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.811007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.811040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.811197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.811231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 
00:37:20.148 [2024-10-13 20:07:09.811408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.811443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.811579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.811613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.811717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.811751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.811936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.811973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.812112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.812154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.812302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.812355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.812487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.812536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.812678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.812715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.812825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.812860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.813010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.813062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 
00:37:20.148 [2024-10-13 20:07:09.813252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.813305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.813415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.813450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.813560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.813594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.813729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.813763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.813889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.813922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.814084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.814118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.814241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.814274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.814378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.814423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.814567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.814603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.814760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.814812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 
00:37:20.148 [2024-10-13 20:07:09.814971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.815005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.815110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.815144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.815251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.815286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.815407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.815458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.815578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.815612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.815776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.815810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.815968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.816001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.816128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.816161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.816293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.816327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.816450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.816485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 
00:37:20.148 [2024-10-13 20:07:09.816619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.148 [2024-10-13 20:07:09.816667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.148 qpair failed and we were unable to recover it. 00:37:20.148 [2024-10-13 20:07:09.816841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.816882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.817030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.817068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.817198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.817250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.817447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.817482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.817598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.817633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.817812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.817849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.817993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.818043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.818204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.818238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.818385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.818441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 
00:37:20.149 [2024-10-13 20:07:09.818557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.818594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.818747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.818801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.818940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.818993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.819094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.819128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.819300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.819343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.819483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.819517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.819622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.819675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.819816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.819854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.819990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.820027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.820173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.820210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 
00:37:20.149 [2024-10-13 20:07:09.820387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.820447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.820584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.820632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.820787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.820827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.820982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.821021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.821170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.821224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.821375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.821420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.821570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.821605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.821771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.821837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.821986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.822042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.822171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.822212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 
00:37:20.149 [2024-10-13 20:07:09.822369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.822418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.822569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.822602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.822754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.822791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.822961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.822998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.823126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.823160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.823308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.823341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.823517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.823552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.823718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.823751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.823883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.823917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.824015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.824049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 
00:37:20.149 [2024-10-13 20:07:09.824204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-10-13 20:07:09.824238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-10-13 20:07:09.824379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.824420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.824566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.824601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.824726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.824759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.824890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.824926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.825032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.825071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.825218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.825255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.825381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.825423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.825569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.825603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.825783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.825820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-10-13 20:07:09.825953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.826005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.826146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.826185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.826362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.826424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.826518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.826551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.826683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.826723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.826881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.826915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.827044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.827078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.827187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.827221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.827370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.827411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.827545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.827579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-10-13 20:07:09.827733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.827766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.827894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.827931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.828047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.828097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.828231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.828267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.828413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.828446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.828588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.828622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.828779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.828812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.828957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.828993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.829169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.829219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.829355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.829390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-10-13 20:07:09.829527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.829560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.829666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.829700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.829835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.829869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.830009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.830042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.830196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.830229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.830376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.830421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.830550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.830584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.830707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.830758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.830857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.830894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.831011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.831050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-10-13 20:07:09.831276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-10-13 20:07:09.831341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-10-13 20:07:09.831525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.831560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.831719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.831753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.831903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.831940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.832120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.832170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.832332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.832365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.832483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.832518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.832644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.832693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.832867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.832904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.833058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.833160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-10-13 20:07:09.833319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.833353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.833495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.833530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.833657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.833695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.833888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.833922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.834080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.834134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.834327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.834361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.834494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.834554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.834748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.834788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.834923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.834960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.835111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.835149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-10-13 20:07:09.835260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.835297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.835483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.835533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.835667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.835707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.835876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.835928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.836047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.836099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.836205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.836240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.836408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.836443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.836571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.836605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.836765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.836813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.836927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.836963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-10-13 20:07:09.837077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.837111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.837284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.837321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.837463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.837509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.837652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.837722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.837934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.837992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.838164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.838201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.838343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.838380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.838548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.838581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.838738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.838772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.838931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.838969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-10-13 20:07:09.839149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-10-13 20:07:09.839186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-10-13 20:07:09.839340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.839390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.839501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.839535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.839655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.839704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.839899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.839955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.840089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.840123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.840228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.840262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.840364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.840409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.840550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.840584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.840708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.840742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 
00:37:20.152 [2024-10-13 20:07:09.840871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.840904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.841012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.841046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.841182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.841217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.841354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.841387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.841524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.841565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.841723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.841756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.841895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.841929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.842058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.842091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.842199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.842234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.842371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.842418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 
00:37:20.152 [2024-10-13 20:07:09.842552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.842600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.842712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.842753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.842909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.842943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.843102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.843139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.843280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.843316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.843473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.843507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.843659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.843696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.843848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.843885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.844042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.844077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.844204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.844239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 
00:37:20.152 [2024-10-13 20:07:09.844407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.844456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.844603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.844641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.844855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.844919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.845171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.845231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.845358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.845403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.845573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.845608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.152 [2024-10-13 20:07:09.845804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.152 [2024-10-13 20:07:09.845856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.152 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.845996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.846033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.846178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.846215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.846391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.846446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 
00:37:20.153 [2024-10-13 20:07:09.846624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.846662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.846941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.846994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.847241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.847301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.847482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.847519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.847627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.847661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.847810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.847863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.848008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.848046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.848220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.848257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.848390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.848454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.848610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.848644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 
00:37:20.153 [2024-10-13 20:07:09.848750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.848801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.848942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.848980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.849121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.849158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.849292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.849329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.849517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.849571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.849760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.849799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.849947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.849984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.850156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.850194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.850366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.850428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.850583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.850632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 
00:37:20.153 [2024-10-13 20:07:09.850769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.850808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.850956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.850994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.851135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.851173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.851284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.851334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.851465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.851499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.851598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.851633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.851796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.851829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.851977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.852014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.852192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.852251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.852387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.852429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 
00:37:20.153 [2024-10-13 20:07:09.852528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.852561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.852718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.852756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.852872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.852925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.853066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.853104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.853244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.853280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.853464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.853513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.153 [2024-10-13 20:07:09.853681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.153 [2024-10-13 20:07:09.853717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.153 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.853880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.853978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.854113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.854159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.854268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.854302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 
00:37:20.154 [2024-10-13 20:07:09.854466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.854501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.854641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.854676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.854778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.854813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.854952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.854986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.855113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.855147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.855307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.855341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.855483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.855519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.855712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.855766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.855941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.855993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.856185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.856219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 
00:37:20.154 [2024-10-13 20:07:09.856374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.856414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.856547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.856582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.856732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.856783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.856982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.857055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.857201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.857248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.857409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.857461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.857619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.857655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.857790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.857827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.857959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.857996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.858154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.858207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 
00:37:20.154 [2024-10-13 20:07:09.858329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.858363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.858541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.858589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.858712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.858747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.858882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.858916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.859021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.859054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.859245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.859278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.859417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.859452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.859556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.859589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.859723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.859773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.859910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.859947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 
00:37:20.154 [2024-10-13 20:07:09.860163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.860200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.860367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.860417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.860562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.860595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.860727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.860760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.860943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.860980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.861094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.861131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.154 [2024-10-13 20:07:09.861297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.154 [2024-10-13 20:07:09.861334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.154 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.861502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.861542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.861717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.861754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.862007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.862066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 
00:37:20.155 [2024-10-13 20:07:09.862273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.862334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.862513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.862549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.862683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.862717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.862850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.862884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.863040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.863077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.863281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.863318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.863482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.863531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.863674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.863728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.863907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.863945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.864084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.864121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 
00:37:20.155 [2024-10-13 20:07:09.864268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.864305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.864450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.864485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.864617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.864650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.864772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.864805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.864977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.865020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.865170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.865207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.865315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.865353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.865538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.865587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.865741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.865776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.865953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.866004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 
00:37:20.155 [2024-10-13 20:07:09.866197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.866254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.866375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.866415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.866575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.866608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.866728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.866766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.866897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.866948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.867121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.867159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.867328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.867365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.867504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.867552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.867716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.867752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.867860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.867895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 
00:37:20.155 [2024-10-13 20:07:09.868028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.868063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.868222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.868256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.868389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.868434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.868537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.868572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.868729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.868781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.868962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.869014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.869121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.155 [2024-10-13 20:07:09.869157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.155 qpair failed and we were unable to recover it. 00:37:20.155 [2024-10-13 20:07:09.869315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.869349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.869500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.869534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.869720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.869785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 
00:37:20.156 [2024-10-13 20:07:09.869915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.869954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.870073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.870110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.870269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.870302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.870426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.870460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.870594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.870629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.870783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.870838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.871015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.871070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.871207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.871241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.871403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.871438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.871563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.871616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 
00:37:20.156 [2024-10-13 20:07:09.871758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.871791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.871892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.871927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.872063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.872098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.872228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.872262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.872402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.872441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.872599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.872632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.872761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.872795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.872930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.872984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.873124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.873177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.873274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.873308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 
00:37:20.156 [2024-10-13 20:07:09.873459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.873513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.873626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.873662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.873818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.873852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.873996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.874030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.874161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.874195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.874304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.874338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.874482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.874516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.874622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.874656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.874787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.874827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.874950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.875003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 
00:37:20.156 [2024-10-13 20:07:09.875146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.875198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.875299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.875333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.875488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.875541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.875668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.875720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.156 qpair failed and we were unable to recover it. 00:37:20.156 [2024-10-13 20:07:09.875850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.156 [2024-10-13 20:07:09.875885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.875989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.876024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.876165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.876199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.876356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.876389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.876524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.876557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.876696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.876730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 
00:37:20.157 [2024-10-13 20:07:09.876870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.876904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.877070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.877104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.877263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.877297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.877466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.877520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.877643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.877696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.877853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.877891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.878085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.878138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.878296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.878330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.878494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.878528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.878692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.878747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 
00:37:20.157 [2024-10-13 20:07:09.878877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.878911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.879064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.879098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.879228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.879262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.879401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.879436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.879591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.879632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.879759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.879792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.879890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.879924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.880028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.880061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.880198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.880232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.880409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.880451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 
00:37:20.157 [2024-10-13 20:07:09.880609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.880643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.880839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.880873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.881005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.881039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.881176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.881211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.881347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.881381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.881518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.881557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.881676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.881713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.881849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.881886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.882025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.882075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.882231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.882264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 
00:37:20.157 [2024-10-13 20:07:09.882406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.882440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.882622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.157 [2024-10-13 20:07:09.882676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.157 qpair failed and we were unable to recover it. 00:37:20.157 [2024-10-13 20:07:09.882851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.882902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.883048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.883100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.883260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.883295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.883425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.883459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.883563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.883597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.883760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.883798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.883942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.883979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.884124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.884161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 
00:37:20.158 [2024-10-13 20:07:09.884343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.884378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.884531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.884581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.884748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.884785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.884891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.884927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.885079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.885117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.885267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.885306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.885438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.885474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.885651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.885699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.885810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.885847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.886016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.886068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 
00:37:20.158 [2024-10-13 20:07:09.886229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.886274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.886416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.886452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.886588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.886641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.886780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.886814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.886934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.886974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.887128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.887167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.887335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.887372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.887525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.887559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.887719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.887753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.887890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.887925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 
00:37:20.158 [2024-10-13 20:07:09.888072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.888120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.888289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.888324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.888528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.888576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.888721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.888773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.888962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.889000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.889222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.889260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.889414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.889469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.889608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.889642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.889862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.889900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.890024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.890061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 
00:37:20.158 [2024-10-13 20:07:09.890205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.890244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.890410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.158 [2024-10-13 20:07:09.890445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.158 qpair failed and we were unable to recover it. 00:37:20.158 [2024-10-13 20:07:09.890602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.890650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.890832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.890869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.891183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.891253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.891380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.891443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.891556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.891590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.891715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.891748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.891880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.891933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.892076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.892114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 
00:37:20.159 [2024-10-13 20:07:09.892300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.892337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.892550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.892600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.892760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.892801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.892958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.892997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.893137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.893175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.893300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.893354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.893534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.893573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.893669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.893721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.893869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.893907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.894144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.894197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 
00:37:20.159 [2024-10-13 20:07:09.894344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.894381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.894520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.894554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.894683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.894717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.894817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.894851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.894978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.895012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.895195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.895233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.895427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.895493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.895629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.895697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.895887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.895924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.896211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.896280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 
00:37:20.159 [2024-10-13 20:07:09.896468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.896514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.896623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.896657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.896794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.896829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.896983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.897035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.897229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.897273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.897421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.897472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.897627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.897694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.897859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.897896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.898012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.898066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.898276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.898338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 
00:37:20.159 [2024-10-13 20:07:09.898499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.898533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.898648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.159 [2024-10-13 20:07:09.898703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.159 qpair failed and we were unable to recover it. 00:37:20.159 [2024-10-13 20:07:09.898917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.898983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.899162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.899237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.899384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.899428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.899602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.899636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.899795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.899829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.899956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.900008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.900162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.900199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.900352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.900385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 
00:37:20.160 [2024-10-13 20:07:09.900502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.900536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.900633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.900672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.900824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.900859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.900965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.901029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.901177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.901214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.901372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.901413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.901575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.901609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.901751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.901788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.901914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.901947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.902079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.902113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 
00:37:20.160 [2024-10-13 20:07:09.902289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.902326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.902439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.902473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.902562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.902595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.902767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.902821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.903009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.903046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.903185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.903220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.903379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.903421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.903584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.903618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.903788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.903826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.904028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.904085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 
00:37:20.160 [2024-10-13 20:07:09.904260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.904295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.904478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.904517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.904689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.904727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.904905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.904939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.905080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.905118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.905291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.905329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.905462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.905497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.905662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.905716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.905917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.905952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.906057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.906092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 
00:37:20.160 [2024-10-13 20:07:09.906259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.906293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.906432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.160 [2024-10-13 20:07:09.906483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.160 qpair failed and we were unable to recover it. 00:37:20.160 [2024-10-13 20:07:09.906588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.906621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.906747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.906781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.906968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.907001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.907143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.907177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.907312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.907364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.907528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.907564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.907705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.907740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.907854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.907906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 
00:37:20.161 [2024-10-13 20:07:09.908063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.908097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.908253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.908293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.908472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.908527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.908688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.908722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.908898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.908932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.909090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.909124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.909283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.909323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.909495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.909530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.909655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.909688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.909861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.909933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 
00:37:20.161 [2024-10-13 20:07:09.910047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.910081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.910209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.910243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.910408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.910446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.910625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.910659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.910839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.910877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.910998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.911034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.911184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.911217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.911352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.911409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.911533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.911572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.911758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.911791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 
00:37:20.161 [2024-10-13 20:07:09.911919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.911953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.912072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.912106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.912235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.912274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.912418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.912470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.912601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.912635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.912798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.912831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.912987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.913026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.913164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.161 [2024-10-13 20:07:09.913202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.161 qpair failed and we were unable to recover it. 00:37:20.161 [2024-10-13 20:07:09.913386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.913425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.913581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.913618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 
00:37:20.162 [2024-10-13 20:07:09.913738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.913775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.913930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.913963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.914071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.914105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.914230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.914263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.914365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.914406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.914509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.914542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.914719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.914755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.914897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.914931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.915065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.915116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.915249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.915286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 
00:37:20.162 [2024-10-13 20:07:09.915441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.915476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.915572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.915611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.915729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.915766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.915877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.915911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.916065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.916098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.916264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.916300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.916461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.916496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.916598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.916632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.916742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.916776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.916903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.916936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 
00:37:20.162 [2024-10-13 20:07:09.917041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.917076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.917260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.917309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.917427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.917461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.917568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.917602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.917795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.917829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.917970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.918005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.918125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.918176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.918338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.918371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.918537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.918572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.918748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.918785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 
00:37:20.162 [2024-10-13 20:07:09.918962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.918999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.919136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.919170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.919313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.919364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.919495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.919529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.919633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.919667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.919790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.919823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.919994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.920027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.920159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.162 [2024-10-13 20:07:09.920194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.162 qpair failed and we were unable to recover it. 00:37:20.162 [2024-10-13 20:07:09.920337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.920388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.920605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.920660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 
00:37:20.163 [2024-10-13 20:07:09.920826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.920863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.921001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.921055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.921247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.921298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.921442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.921477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.921584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.921618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.921815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.921849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.922006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.922040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.922197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.922236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.922343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.922380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.922574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.922608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 
00:37:20.163 [2024-10-13 20:07:09.922744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.922778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.922911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.922951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.923134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.923178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.923327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.923361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.923501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.923535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.923680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.923713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.923912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.923950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.924172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.924229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.924362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.924403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.924539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.924585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 
00:37:20.163 [2024-10-13 20:07:09.924720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.924754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.924887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.924922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.925057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.925090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.925184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.925217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.925344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.925378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.925494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.925538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.925674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.925708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.925804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.925838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.926005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.926038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.926176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.926216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 
00:37:20.163 [2024-10-13 20:07:09.926329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.926364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.926518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.926553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.926680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.926714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.926847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.926882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.927058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.927096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.927264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.927300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.927442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.927477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.163 [2024-10-13 20:07:09.927614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.163 [2024-10-13 20:07:09.927648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.163 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.927800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.927835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.927979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.928013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-10-13 20:07:09.928172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.928224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.928372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.928425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.928604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.928638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.928774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.928808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.928936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.928970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.929122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.929156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.929285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.929319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.929481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.929517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.929615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.929659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-10-13 20:07:09.929795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-10-13 20:07:09.929829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-10-13 20:07:09.929932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.929966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.930108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.930148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.930256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.930291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.930433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.930469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.930620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.930655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.930795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.930829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.930925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.930960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.931094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.931128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.931264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.931298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.931431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.931466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-10-13 20:07:09.931570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.931604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.931785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.931834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.931946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.931984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.932122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.932157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.932259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.932293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.932452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.932488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.932592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.932627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.932765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.932801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.932936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.932971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.933102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.933137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-10-13 20:07:09.933302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.933336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.933533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.933582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.933749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.933784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.933948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.934022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.934240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.934277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.934470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.934505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.934654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.934690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.934873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.934911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.935047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.935081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.935184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.935217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-10-13 20:07:09.935339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.935376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.935536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.935570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.935700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.935734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.935853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.935890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.936012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.936045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.936171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.936221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.936369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.936416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.936536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.936570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.936726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.936778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-10-13 20:07:09.936912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-10-13 20:07:09.936949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.446 [2024-10-13 20:07:09.937084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.937119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.937230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.937272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.937447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.937516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.937658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.937696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.937873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.937912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.938085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.938122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.938274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.938309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.938463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.938498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.938612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.938648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.938842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.938876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 
00:37:20.446 [2024-10-13 20:07:09.939020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.939071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.939239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.939275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.939408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.939442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.939551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.939595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.939725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.939758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.939903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.939936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.940044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.940098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.940243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.940281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.940413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.940447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.940588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.940622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 
00:37:20.446 [2024-10-13 20:07:09.940844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.940877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.940989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.941023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.941154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.941189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.941320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.941371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.941499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.941533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.941631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.941670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.941898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.941932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.942061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.942095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.942233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.942268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.942477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.942514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 
00:37:20.446 [2024-10-13 20:07:09.942673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.942715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.942851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.942889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.942997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.943034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.943197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.943242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.943413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.943453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.943627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.943661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.943757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.943801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.943941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.943993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.944105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.944145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 00:37:20.446 [2024-10-13 20:07:09.944308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.446 [2024-10-13 20:07:09.944342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.446 qpair failed and we were unable to recover it. 
00:37:20.446 [2024-10-13 20:07:09.944498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.944533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.944759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.944820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.945000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.945033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.945198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.945232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.945410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.945464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.945575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.945609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.945771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.945821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.945972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.946015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.946195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.946231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.946385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.946447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-10-13 20:07:09.946571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.946606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.946769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.946815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.946951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.947007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.947269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.947305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.947454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.947490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.947628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.947678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.947878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.947923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.948050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.948085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.948214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.948248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.948366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.948424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-10-13 20:07:09.948580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.948618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.948793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.948828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.948992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.949027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.949127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.949172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.949280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.949315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.949491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.949540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.949713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.949750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.949884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.949919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.950063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.950098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.950261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.950296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-10-13 20:07:09.950433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.950467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.950593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.950626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.950775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.950808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.952332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.952379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.952558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.952593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.952751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.952786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.952932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.952970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.953106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.953140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.953244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.953277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-10-13 20:07:09.953385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-10-13 20:07:09.953425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-10-13 20:07:09.953529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.953562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.953706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.953745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.953860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.953905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.954012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.954045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.954268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.954302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.954445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.954480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.954634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.954684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.954853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.954889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.955031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.955219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-10-13 20:07:09.955368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.955512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.955648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.955783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.955938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.955975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.956087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.956130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.956262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.956296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.956432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.956466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.956571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.956605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.956710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.956744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-10-13 20:07:09.956858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.956903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.957020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.957057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.957204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.957238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.957361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.957409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.957512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.957546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.957679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.957713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.957842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.957895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.958030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.958068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.958222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.958255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.958347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.958381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-10-13 20:07:09.958498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.958532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.958667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.958712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.958809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.958843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.958968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.959004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.959144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.959179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.959316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.959359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-10-13 20:07:09.959468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-10-13 20:07:09.959502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.959637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.959671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.959783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.959817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.960822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.960861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 
00:37:20.449 [2024-10-13 20:07:09.961013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.961056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.961213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.961253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.961375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.961416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.961529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.961562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.961696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.961747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.961968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.962022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.962136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.962171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.962292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.962328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.962481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.962517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.962649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.962693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 
00:37:20.449 [2024-10-13 20:07:09.962817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.962852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.962996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.963032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.963166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.963201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.963338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.963373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.963497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.963535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.963678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.963721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.963845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.963880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.964089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.964138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.964278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.964317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.964449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.964484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 
00:37:20.449 [2024-10-13 20:07:09.964653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.964690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.964804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.964836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.964973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.965013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.965125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.965158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.965305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.965347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.965493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.965542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.965673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.965709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.965853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.965896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.966082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.966120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.966237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.966270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 
00:37:20.449 [2024-10-13 20:07:09.966377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.966417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.966528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.966562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.966686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.966720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.966843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.966887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-10-13 20:07:09.967023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-10-13 20:07:09.967056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.967238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.967294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.967443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.967481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.967644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.967697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.967839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.967896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.968026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.968061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-10-13 20:07:09.968193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.968227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.968364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.968420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.968527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.968561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.968669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.968703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.968849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.968886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.969009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.969213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.969403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.969526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.969652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-10-13 20:07:09.969790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.969923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.969962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.970103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.970140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.970343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.970407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.970524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.970560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.970730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.970785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.970958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.970997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.971144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.971181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.971301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.971334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.971488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.971523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-10-13 20:07:09.971642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.971696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.971836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.971872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.972021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.972055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.972199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.972254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.972387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.972446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.972580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.972614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.972719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.972758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.972860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.972894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.973011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.973059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.973205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.973241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-10-13 20:07:09.973427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.973495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.973616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.973652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.973793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.973830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.973964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.973998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.974129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-10-13 20:07:09.974165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-10-13 20:07:09.974278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.974315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.975137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.975177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.975370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.975423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.976342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.976386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.976540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.976574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 
00:37:20.451 [2024-10-13 20:07:09.976693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.976731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.976900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.976940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.977194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.977228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.977334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.977369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.977526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.977563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.977703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.977736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.977873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.977906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.978061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.978195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.978326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 
00:37:20.451 [2024-10-13 20:07:09.978524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.978659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.978817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.978965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.978998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.979149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.979200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.979351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.979406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.979523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.979559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.979670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.979706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.979858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.979892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.980008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.980042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 
00:37:20.451 [2024-10-13 20:07:09.980148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.980181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.980311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.980346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.980540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.980587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.980697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.980732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.980846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.980880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.981058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.981112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.981320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.981356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.981529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.981562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.981698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.981736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.981956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.982025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 
00:37:20.451 [2024-10-13 20:07:09.982215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.982253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.982403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.982454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.982579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.982612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.982742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.982811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.982970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-10-13 20:07:09.983007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-10-13 20:07:09.983267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.983342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.983507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.983543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.983679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.983734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.983856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.983890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.984099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.984166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 
00:37:20.452 [2024-10-13 20:07:09.984280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.984317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.984488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.984543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.984693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.984729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.984891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.984950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.985097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.985157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.985329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.985365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.985496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.985531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.985659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.985706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.985837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.985880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.986036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.986074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 
00:37:20.452 [2024-10-13 20:07:09.986222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.986258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.986433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.986468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.986576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.986610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.986717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.986755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.986925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.986962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.987091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.987128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.987241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.987277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.987419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.987452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.987565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.987599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-10-13 20:07:09.987729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-10-13 20:07:09.987762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 
00:37:20.452 [2024-10-13 20:07:09.987887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-10-13 20:07:09.987924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [... output trimmed: the two error lines above repeat continuously from 20:07:09.987 through 20:07:10.024 (roughly 200 further connection attempts), alternating across tqpair=0x6150001f2f00, 0x61500021ff00, 0x6150001ffe80 and 0x615000210000, always against addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:20.458 [2024-10-13 20:07:10.024776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.458 [2024-10-13 20:07:10.024809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:20.458 qpair failed and we were unable to recover it.
00:37:20.458 [2024-10-13 20:07:10.024944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.024977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.025088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.025121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.025249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.025283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.025437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.025488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.025629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.025678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.025795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.025839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.025954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.025990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.026141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.026176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.026284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.026320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.026457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.026492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 
00:37:20.458 [2024-10-13 20:07:10.026595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.026628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.026737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.026772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.026888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.026922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.027055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.027093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.027264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:20.458 [2024-10-13 20:07:10.027482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.027520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.027619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.027665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.027807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.027841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.458 [2024-10-13 20:07:10.027952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.458 [2024-10-13 20:07:10.027986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.458 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.028099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.028132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 
00:37:20.459 [2024-10-13 20:07:10.028263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.028297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.028410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.028446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.028562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.028597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.028719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.028768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.028892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.028927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.029085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.029126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.029240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.029275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.029424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.029459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.029586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.029634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.029757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.029793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 
00:37:20.459 [2024-10-13 20:07:10.029907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.029942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.030951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.030985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.031102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.031135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.031308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.031343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 
00:37:20.459 [2024-10-13 20:07:10.031492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.031541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.031672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.031729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.031875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.031930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.032178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.032215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.032368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.032426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.032560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.032595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.032741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.032795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.032912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.032947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.033107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.033142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.033248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.033282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 
00:37:20.459 [2024-10-13 20:07:10.033417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.033466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.033590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.033639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.033806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.033868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.034002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.034035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.034144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.034177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.034279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.034312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.034451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.034487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.459 [2024-10-13 20:07:10.034589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.459 [2024-10-13 20:07:10.034624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.459 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.034823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.034857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.034973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 
00:37:20.460 [2024-10-13 20:07:10.035190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.035357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.035517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.035658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.035801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.035937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.035977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.036100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.036136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.036271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.036304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.036449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.036483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.036587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.036620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 
00:37:20.460 [2024-10-13 20:07:10.036749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.036783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.036893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.036926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.037960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.037995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.038128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.038162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 
00:37:20.460 [2024-10-13 20:07:10.038265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.038299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.038422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.038457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.038562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.038599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.038771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.038806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.038914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.038948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.039080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.039114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.039215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.039249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.039355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.039405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.039533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.039567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.039669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.039709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 
00:37:20.460 [2024-10-13 20:07:10.039854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.039888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.039989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.040146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.040283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.040448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.040578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.040720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.040870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.460 [2024-10-13 20:07:10.040916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.460 qpair failed and we were unable to recover it. 00:37:20.460 [2024-10-13 20:07:10.041036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.041070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.041175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.041209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 
00:37:20.461 [2024-10-13 20:07:10.041322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.041356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.041505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.041551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.041663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.041701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.041811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.041845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.041972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.042005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.042110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.042144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.042285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.042318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.042492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.042529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.042637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.042671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.042798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.042843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 
00:37:20.461 [2024-10-13 20:07:10.042966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.043104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.043244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.043406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.043579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.043742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.043899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.043944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.044081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.044119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.044243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.044278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.044416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.044456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 
00:37:20.461 [2024-10-13 20:07:10.044590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.044639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.044781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.044827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.044973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.045010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.045132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.045170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.045306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.045344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.045478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.045515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.045648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.045703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.045856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.045900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.046081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.046131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.046257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.046291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 
00:37:20.461 [2024-10-13 20:07:10.046407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.046445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.046598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.046641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.046794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.046870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.047037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.047091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.047236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.047284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.047416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.047451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.047556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.047590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.047699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.047732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.461 [2024-10-13 20:07:10.047846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.461 [2024-10-13 20:07:10.047880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.461 qpair failed and we were unable to recover it. 00:37:20.462 [2024-10-13 20:07:10.048025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.462 [2024-10-13 20:07:10.048070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.462 qpair failed and we were unable to recover it. 
00:37:20.462 - 00:37:20.467 [2024-10-13 20:07:10.048175 through 20:07:10.086548] [... the same two-line failure repeats for the remainder of this interval: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00, 0x61500021ff00, 0x6150001ffe80 or 0x615000210000 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:20.467 [2024-10-13 20:07:10.086678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.086723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.086882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.086928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.087081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.087119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.087258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.087296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.087443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.087509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.087652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.087699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.087875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.087928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.088063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.088102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.088236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.088273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.088433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.088468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 
00:37:20.467 [2024-10-13 20:07:10.088571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.088607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.088739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.088780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.088910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.088963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.089120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.089158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.089285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.089324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.089469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.089504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.089608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.089642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.089806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.089844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.090054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.090093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.090274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.090314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 
00:37:20.467 [2024-10-13 20:07:10.090466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.090502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.090616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.090651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.090769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.090818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.091009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.091048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.091166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.091220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.091344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.091389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.091557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.091616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.091773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.091822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.091999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.467 [2024-10-13 20:07:10.092037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.467 qpair failed and we were unable to recover it. 00:37:20.467 [2024-10-13 20:07:10.092191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.092228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 
00:37:20.468 [2024-10-13 20:07:10.092385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.092455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.092561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.092595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.092782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.092842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.093043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.093096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.093319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.093376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.093532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.093568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.093726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.093772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.093955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.094012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.094177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.094232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.094347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.094409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 
00:37:20.468 [2024-10-13 20:07:10.094540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.094574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.094723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.094768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.094875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.094908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.095096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.095159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.095298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.095338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.095493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.095529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.095707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.095756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.095886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.095955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.096167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.096239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.096438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.096474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 
00:37:20.468 [2024-10-13 20:07:10.096607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.096656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.096826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.096885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.097067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.097124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.097298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.097335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.097497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.097533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.097698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.097750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.097972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.098031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.098188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.098226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.098387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.098435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.098584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.098634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 
00:37:20.468 [2024-10-13 20:07:10.098778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.098815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.098927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.098982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.099152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.099194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.099371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.099414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.099550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.099586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.099718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.099767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.099946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.099994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.100122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.100161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.100288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.100327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.468 qpair failed and we were unable to recover it. 00:37:20.468 [2024-10-13 20:07:10.100480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.468 [2024-10-13 20:07:10.100513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 
00:37:20.469 [2024-10-13 20:07:10.100648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.100682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.100900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.100949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.101082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.101141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.101250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.101288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.101422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.101475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.101606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.101640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.101779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.101843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.102010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.102048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.102203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.102241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.102424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.102492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 
00:37:20.469 [2024-10-13 20:07:10.102654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.102722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.102889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.102926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.103052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.103104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.103222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.103260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.103383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.103426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.103573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.103606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.103776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.103839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.103980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.104035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.104179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.104218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.104378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.104429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 
00:37:20.469 [2024-10-13 20:07:10.104602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.104651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.104794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.104840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.105021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.105085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.105306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.105376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.105511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.105545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.105698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.105743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.105888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.105939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.106143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.106232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.106407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.106442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.106590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.106645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 
00:37:20.469 [2024-10-13 20:07:10.106794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.106830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.106977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.107041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.107173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.107213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.107336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.107374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.107550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.107606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.107753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.107800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.469 [2024-10-13 20:07:10.107924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.469 [2024-10-13 20:07:10.107969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.469 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.108152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.108189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.108301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.108334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.108485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.108521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 
00:37:20.470 [2024-10-13 20:07:10.108649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.108708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.108877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.108934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.109083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.109136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.109285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.109326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.109465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.109499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.109619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.109653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.109823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.109881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.110109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.110167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.110289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.110328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.110510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.110546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 
00:37:20.470 [2024-10-13 20:07:10.110683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.110718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.110874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.110929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.111089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.111123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.111281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.111327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.111489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.111544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.111682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.111722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.111936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.111973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.112087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.112125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.112276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.112314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.112482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.112531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 
00:37:20.470 [2024-10-13 20:07:10.112681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.112717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.112816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.112867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.113015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.113063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.113275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.113313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.113489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.113524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.113668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.113703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.113845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.113879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.114137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.114192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.114339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.114376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.114540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.114590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 
00:37:20.470 [2024-10-13 20:07:10.114832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.114893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.115038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.115113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.115317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.115378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.115529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.115564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.115686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.115755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.115878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.115919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.116042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.470 [2024-10-13 20:07:10.116112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.470 qpair failed and we were unable to recover it. 00:37:20.470 [2024-10-13 20:07:10.116265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.116304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.116476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.116525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.116650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.116700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 
00:37:20.471 [2024-10-13 20:07:10.116873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.116936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.117071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.117106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.117233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.117284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.117458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.117514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.117689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.117751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.117878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.117915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.118084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.118121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.118238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.118289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.118451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.118486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.118595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.118628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 
00:37:20.471 [2024-10-13 20:07:10.118730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.118764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.118889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.118924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.119074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.119112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.119276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.119314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.119471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.119506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.119616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.119654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.119835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.119895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.120059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.120112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.120271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.120306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.120443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.120478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 
00:37:20.471 [2024-10-13 20:07:10.120628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.120694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.120929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.120988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.121178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.121254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.121404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.121459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.121640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.121678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.121788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.121826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.121977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.122014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.122253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.122307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.122454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.122499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.122672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.122727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 
00:37:20.471 [2024-10-13 20:07:10.122903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.123004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.123177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.471 [2024-10-13 20:07:10.123242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.471 qpair failed and we were unable to recover it. 00:37:20.471 [2024-10-13 20:07:10.123384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.123434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.123536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.123571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.123673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.123727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.123894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.123933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.124060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.124098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.124247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.124284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.124441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.124475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.124583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.124627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 
00:37:20.472 [2024-10-13 20:07:10.124791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.124828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.125007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.125045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.125198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.125242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.125373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.125419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.125602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.125636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.125736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.125798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.125939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.125976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.126135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.126172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.126304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.126342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.126510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.126545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 
00:37:20.472 [2024-10-13 20:07:10.126673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.126721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.126888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.126925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.127077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.127115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.127288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.127325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.127491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.127541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.127687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.127728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.127870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.127933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.128070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.128109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.128250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.128287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.128451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.128501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 
00:37:20.472 [2024-10-13 20:07:10.128615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.128650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.128789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.128823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.129019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.129057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.129262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.129338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.129467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.129502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.129621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.129655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.129838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.129872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.129991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.130030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.130219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.130256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.130373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.130414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 
00:37:20.472 [2024-10-13 20:07:10.130531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.130566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.130692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.130727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.472 [2024-10-13 20:07:10.130839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.472 [2024-10-13 20:07:10.130873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.472 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.130992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.131026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.131195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.131233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.131371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.131432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.131556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.131590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.131694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.131730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.131918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.131952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.132128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.132166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 
00:37:20.473 [2024-10-13 20:07:10.132306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.132344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.132480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.132515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.132664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.132713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.132828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.132880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.133032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.133069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.133189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.133226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.133410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.133464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.133646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.133695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.133870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.133944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.134105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.134158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 
00:37:20.473 [2024-10-13 20:07:10.134283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.134317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.134483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.134518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.134659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.134718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.134874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.134910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.135011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.135045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.135207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.135244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.135400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.135435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.135541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.135576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.135683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.135717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.135826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.135877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 
00:37:20.473 [2024-10-13 20:07:10.136047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.136085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.136240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.136278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.136421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.136498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.136610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.136648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.136831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.136870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.137012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.137062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.137187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.137224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.137382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.137425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.137528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.137562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.137718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.137768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 
00:37:20.473 [2024-10-13 20:07:10.138003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.138044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.138231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.138270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.138465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.473 [2024-10-13 20:07:10.138501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.473 qpair failed and we were unable to recover it. 00:37:20.473 [2024-10-13 20:07:10.138669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.138703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.138831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.138869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.139020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.139060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.139260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.139298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.139471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.139506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.139635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.139669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.139832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.139876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 
00:37:20.474 [2024-10-13 20:07:10.140041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.140079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.140224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.140285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.140495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.140531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.140667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.140728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.140868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.140904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.141059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.141112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.141253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.141290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.141410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.141473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.141608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.141642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.141798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.141848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 
00:37:20.474 [2024-10-13 20:07:10.142031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.142065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.142266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.142303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.142458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.142494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.142665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.142705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.142909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.142944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.143130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.143179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.143298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.143335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.143519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.143568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.143718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.143768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.143919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.143984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 
00:37:20.474 [2024-10-13 20:07:10.144154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.144221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.144391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.144433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.144576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.144610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.144733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.144780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.144917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.144954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.145079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.145118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.145265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.145309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.145494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.145542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.145666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.145715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.145826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.145874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 
00:37:20.474 [2024-10-13 20:07:10.146039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.146095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.146279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.146332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.474 [2024-10-13 20:07:10.146497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.474 [2024-10-13 20:07:10.146541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.474 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.146739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.146792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.147026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.147095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.147253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.147291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.147479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.147514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.147687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.147741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.147886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.147928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.148061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.148100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 
00:37:20.475 [2024-10-13 20:07:10.148230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.148268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.148387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.148449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.148588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.148623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.148861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.148899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.149049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.149086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.149213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.149251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.149411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.149446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.149589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.149622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.149757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.149803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.149978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.150017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 
00:37:20.475 [2024-10-13 20:07:10.150133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.150170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.150285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.150335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.150500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.150550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.150741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.150800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.151035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.151095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.151223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.151261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.151431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.151469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.151627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.151680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.151847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.151916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.152117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.152216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 
00:37:20.475 [2024-10-13 20:07:10.152356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.152406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.152535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.152570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.152693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.152740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.152899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.152952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.153076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.153114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.153315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.153352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.153489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.153524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.153658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.153719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.475 [2024-10-13 20:07:10.153895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.475 [2024-10-13 20:07:10.153928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.475 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.154051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.154109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 
00:37:20.476 [2024-10-13 20:07:10.154235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.154273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.154432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.154466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.154578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.154612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.154737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.154770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.154927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.154988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.155136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.155173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.155286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.155324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.155489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.155523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.155630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.155665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.155820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.155866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 
00:37:20.476 [2024-10-13 20:07:10.156041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.156081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.156214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.156259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.156427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.156480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.156621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.156655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.156793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.156827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.156935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.156987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.157241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.157279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.157427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.157506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.157686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.157724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.157837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.157884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 
00:37:20.476 [2024-10-13 20:07:10.158050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.158089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.158274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.158329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.158497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.158535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.158646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.158680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.158827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.158870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.158999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.159051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.159199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.159236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.159346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.159383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.159516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.159550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.159671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.159705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 
00:37:20.476 [2024-10-13 20:07:10.159806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.159840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.159967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.160004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.160110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.160147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.160272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.160313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.160505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.160554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.160735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.160772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.160933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.160985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.161152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.161207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.161355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.476 [2024-10-13 20:07:10.161413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.476 qpair failed and we were unable to recover it. 00:37:20.476 [2024-10-13 20:07:10.161525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.161576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 
00:37:20.477 [2024-10-13 20:07:10.161687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.161721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.161863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.161896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.161995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.162028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.162173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.162207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.162338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.162372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.162501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.162549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.162730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.162770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.162948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.162998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.163150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.163188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.163309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.163347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 
00:37:20.477 [2024-10-13 20:07:10.163490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.163524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.163666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.163700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.163880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.163917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.164120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.164158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.164321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.164361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.164531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.164582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.164755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.164810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.164973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.165037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.165230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.165289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.165418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.165467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 
00:37:20.477 [2024-10-13 20:07:10.165564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.165597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.165706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.165741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.165891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.165925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.166077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.166126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.166268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.166315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.166439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.166490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.166627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.166665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.166787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.166833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.166967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.167002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.167126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.167160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 
00:37:20.477 [2024-10-13 20:07:10.167275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.167313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.167457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.167506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.167632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.167669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.167837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.167936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.168072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.168108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.168261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.168296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.168442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.168477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.168641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.168706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.168866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.477 [2024-10-13 20:07:10.168906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.477 qpair failed and we were unable to recover it. 00:37:20.477 [2024-10-13 20:07:10.169028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.169072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 
00:37:20.478 [2024-10-13 20:07:10.169235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.169270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.169408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.169443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.169563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.169601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.169726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.169765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.169914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.169951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.170092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.170129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.170243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.170276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.170432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.170482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.170629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.170677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.170848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.170915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 
00:37:20.478 [2024-10-13 20:07:10.171043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.171097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.171264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.171299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.171431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.171466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.171610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.171653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.171777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.171815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.171941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.171987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.172126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.172164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.172334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.172372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.172575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.172625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.172745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.172812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 
00:37:20.478 [2024-10-13 20:07:10.173026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.173091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.173247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.173286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.173430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.173466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.173622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.173704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.173945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.174020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.174317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.174385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.174625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.174660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.174877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.174949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.175189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.175249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.175420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.175474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 
00:37:20.478 [2024-10-13 20:07:10.175604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.175653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.175822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.175865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.176004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.176064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.176220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.176284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.176450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.176484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.176613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.176646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.176826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.176863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.177087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.177124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.177271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.177308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 00:37:20.478 [2024-10-13 20:07:10.177476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-10-13 20:07:10.177529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.478 qpair failed and we were unable to recover it. 
00:37:20.479 [2024-10-13 20:07:10.177712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.177763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.177903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.177943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.178091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.178128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.178333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.178377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.178562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.178613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.178778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.178826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.178965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.179019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.179213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.179270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.179423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.179477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.179608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.179641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 
00:37:20.479 [2024-10-13 20:07:10.179836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.179896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.180107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.180152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.180268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.180319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.180438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.180489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.180611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.180644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.180780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.180823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.180942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.181001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.181142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.181182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.181318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.181352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.181474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.181511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 
00:37:20.479 [2024-10-13 20:07:10.181643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.181705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.181910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.181949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.182088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.182138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.182282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.182321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.182473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.182523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.182713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.182762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.182943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.182996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.183243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-10-13 20:07:10.183311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.479 qpair failed and we were unable to recover it. 00:37:20.479 [2024-10-13 20:07:10.183499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.183534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.183693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.183738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 
00:37:20.480 [2024-10-13 20:07:10.183895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.183933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.184060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.184112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.184268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.184307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.184443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.184510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.184660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.184696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.184862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.184900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.185069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.185108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.185249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.185282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.185433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.185494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.185615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.185658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 
00:37:20.480 [2024-10-13 20:07:10.185775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.185810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.185927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.185990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.186164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.186227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.186350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.186406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.186537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.186586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.186819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.186878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.186998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.187048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.187221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.187258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.187411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.187468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.187618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.187667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 
00:37:20.480 [2024-10-13 20:07:10.187839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.187909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.188105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.188161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.188304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.188339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.188518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.188554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.188689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.188724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.188881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.188940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.189190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.189248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.189388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.189430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.189555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.189589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.189722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.189755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 
00:37:20.480 [2024-10-13 20:07:10.189889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.189923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.190095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.190132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.190252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.190303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.190486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.190520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.190636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.190696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.190905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.190971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.191176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.191229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.480 [2024-10-13 20:07:10.191414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-10-13 20:07:10.191469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.480 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.191608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.191643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.191800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.191861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 
00:37:20.481 [2024-10-13 20:07:10.192007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.192071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.192223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.192257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.192353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.192405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.192555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.192606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.192766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.192819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.193001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.193051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.193209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.193245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.193350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.193399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.193534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.193604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.193782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.193853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 
00:37:20.481 [2024-10-13 20:07:10.193975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.194013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.194186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.194235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.194391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.194435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.194565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.194625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.194820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.194860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.195100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.195139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.195280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.195319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.195478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.195513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.195636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.195680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.195795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.195847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 
00:37:20.481 [2024-10-13 20:07:10.196036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.196098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.196257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.196295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.196434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.196486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.196633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.196668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.196812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.196850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.197033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.197071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.197216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.197255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.197402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.197438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.197622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.197660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.197892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.197930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 
00:37:20.481 [2024-10-13 20:07:10.198145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.198183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.198308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.198345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.198508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.198551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.198651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.198684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.198817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.198858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.199028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.199066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.199213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.199251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.199403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.481 [2024-10-13 20:07:10.199456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.481 qpair failed and we were unable to recover it. 00:37:20.481 [2024-10-13 20:07:10.199584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.199632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.199834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.199900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 
00:37:20.482 [2024-10-13 20:07:10.200050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.200110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.200219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.200254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.200388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.200434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.200572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.200627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.200806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.200858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.201040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.201106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.201329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.201413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.201583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.201619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.201801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.201839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.202030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.202097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 
00:37:20.482 [2024-10-13 20:07:10.202266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.202324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.202526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.202564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.202792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.202846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.202976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.203029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.203223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.203264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.203431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.203468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.203602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.203637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.203772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.203810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.203946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.203999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.204180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.204217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 
00:37:20.482 [2024-10-13 20:07:10.204350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.204417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.204618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.204667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.204854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.204920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.205111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.205178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.205286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.205320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.205472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.205521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.205671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.205715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.205823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.205857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.206001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.206039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.206196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.206255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 
00:37:20.482 [2024-10-13 20:07:10.206423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.206482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.206614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.206649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.206819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.206857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.206997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.207035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.207179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.207218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.207407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.207466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.207590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.207649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.207883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.482 [2024-10-13 20:07:10.207921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.482 qpair failed and we were unable to recover it. 00:37:20.482 [2024-10-13 20:07:10.208102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.208168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.208316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.208353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 
00:37:20.483 [2024-10-13 20:07:10.208567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.208615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.208781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.208835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.208964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.209016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.209169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.209240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.209389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.209434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.209568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.209602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.209735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.209774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.209940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.209976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.210155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.210194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.210334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.210383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 
00:37:20.483 [2024-10-13 20:07:10.210526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.210559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.210698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.210750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.210872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.210905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.211033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.211070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.211189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.211226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.211331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.211367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.211540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.211573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.211706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.211774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.211898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.211937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.212163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.212201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 
00:37:20.483 [2024-10-13 20:07:10.212360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.212412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.212568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.212602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.212724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.212757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.212920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.212959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.213128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.213165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.213287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.213320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.213449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.213483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.213644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.213677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.213839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.213872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.213980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.214013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 
00:37:20.483 [2024-10-13 20:07:10.214144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.214178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.214346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.214386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.214502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.214537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.214655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.214713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.214861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.214899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.215106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.215179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.215363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.215417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.215568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.215602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.483 qpair failed and we were unable to recover it. 00:37:20.483 [2024-10-13 20:07:10.215748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.483 [2024-10-13 20:07:10.215781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.215908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.215941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 
00:37:20.484 [2024-10-13 20:07:10.216056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.216093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.216243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.216283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.216468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.216519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.216657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.216714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.216879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.216949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.217104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.217160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.217318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.217353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.217527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.217561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.217690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.217740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.217958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.218040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 
00:37:20.484 [2024-10-13 20:07:10.218204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.218264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.218427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.218461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.218583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.218632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.218787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.218835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.219039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.219115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.219234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.219268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.219379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.219419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.219544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.219577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.219693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.219730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.219865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.219902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 
00:37:20.484 [2024-10-13 20:07:10.220037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.220074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.220263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.220299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.220509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.220559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.220730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.220772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.220976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.221014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.221156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.221195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.221345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.221383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.221523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.221557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.221698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.221732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.221908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.221943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 
00:37:20.484 [2024-10-13 20:07:10.222140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.222177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.222287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.222325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.222520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.222570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.484 [2024-10-13 20:07:10.222771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.484 [2024-10-13 20:07:10.222812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.484 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.222929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.222967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.223191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.223262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.223410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.223463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.223579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.223614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.223799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.223836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.224070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.224120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 
00:37:20.485 [2024-10-13 20:07:10.224270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.224307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.224417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.224470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.224605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.224640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.224756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.224791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.224924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.224975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.225080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.225117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.225283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.225320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.225474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.225523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.225650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.225699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.225834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.225876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 
00:37:20.485 [2024-10-13 20:07:10.226003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.226038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.226170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.226209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.226359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.226400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.226574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.226610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.226715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.226760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.226919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.226952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.227040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.227090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.227244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.227282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.227414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.227449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.227604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.227638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 
00:37:20.485 [2024-10-13 20:07:10.227770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.227804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.227961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.227999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.228154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.228194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.228374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.228420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.228556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.228591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.228701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.228735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.228905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.228943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.229081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.229134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.229316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.229354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.229485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.229519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 
00:37:20.485 [2024-10-13 20:07:10.229678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.229713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.229866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.229905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.230120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.230176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.485 [2024-10-13 20:07:10.230281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.485 [2024-10-13 20:07:10.230319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.485 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.230487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.230523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.230719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.230773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.231098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.231180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.231322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.231359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.231520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.231555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.231718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.231751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 
00:37:20.486 [2024-10-13 20:07:10.231953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.232022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.232233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.232291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.232446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.232480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.232610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.232643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.232802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.232839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.233038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.233075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.233183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.233220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.233377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.233421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.233566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.233600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.233776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.233819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 
00:37:20.486 [2024-10-13 20:07:10.233976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.234013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.234190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.234235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.234347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.234384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.234545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.234587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.234715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.234763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.234908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.234943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.235098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.235152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.235331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.235384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.235550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.235599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.235767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.235817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 
00:37:20.486 [2024-10-13 20:07:10.235939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.235979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.236109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.236210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.236379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.236444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.236607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.236655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.236836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.236872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.237041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.237081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.237222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.237260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.237377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.237439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.237551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.237586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 00:37:20.486 [2024-10-13 20:07:10.237739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.486 [2024-10-13 20:07:10.237785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.486 qpair failed and we were unable to recover it. 
00:37:20.770 [2024-10-13 20:07:10.237953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.770 [2024-10-13 20:07:10.237991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.770 qpair failed and we were unable to recover it. 00:37:20.770 [2024-10-13 20:07:10.238165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.770 [2024-10-13 20:07:10.238221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.770 qpair failed and we were unable to recover it. 00:37:20.770 [2024-10-13 20:07:10.238408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.770 [2024-10-13 20:07:10.238461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.770 qpair failed and we were unable to recover it. 00:37:20.770 [2024-10-13 20:07:10.238570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.770 [2024-10-13 20:07:10.238603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.770 qpair failed and we were unable to recover it. 00:37:20.770 [2024-10-13 20:07:10.238745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.770 [2024-10-13 20:07:10.238797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.770 qpair failed and we were unable to recover it. 00:37:20.770 [2024-10-13 20:07:10.238970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.239007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.239219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.239257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.239409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.239461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.239574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.239610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.239738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.239772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 
00:37:20.771 [2024-10-13 20:07:10.239881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.239915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.240063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.240112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.240269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.240324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.240475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.240511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.240643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.240678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.240855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.240904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.241068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.241121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.241324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.241377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.241526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.241561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.241675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.241712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 
00:37:20.771 [2024-10-13 20:07:10.241857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.241891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.242043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.242094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.242223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.242261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.242370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.242417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.242564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.242597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.242706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.242741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.242880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.242915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.243091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.243128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.243260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.243294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.243421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.243457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 
00:37:20.771 [2024-10-13 20:07:10.243593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.243626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.243727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.243760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.243920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.243954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.244119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.244157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.244293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.244330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.244489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.244525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.244663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.244712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.244870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.244930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.245112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.245166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.245299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.245334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 
00:37:20.771 [2024-10-13 20:07:10.245507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.245556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.245735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.245773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.245991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.771 [2024-10-13 20:07:10.246054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.771 qpair failed and we were unable to recover it. 00:37:20.771 [2024-10-13 20:07:10.246229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.246267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.246410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.246464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.246586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.246620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.246842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.246905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.247101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.247157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.247295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.247346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.247483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.247517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 
00:37:20.772 [2024-10-13 20:07:10.247640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.247711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.247868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.247907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.248034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.248098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.248209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.248248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.248389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.248450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.248558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.248593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.248765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.248805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.249004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.249038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.249298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.249335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.249499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.249532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 
00:37:20.772 [2024-10-13 20:07:10.249663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.249732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.249918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.249971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.250124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.250163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.250310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.250347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.250531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.250567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.250706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.250754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.250895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.250934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.251058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.251109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.251250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.251286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.251458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.251492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 
00:37:20.772 [2024-10-13 20:07:10.251587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.251621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.251727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.251760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.251907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.251944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.252108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.252145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.252266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.252303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.252464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.252498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.252622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.252656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.252751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.252785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.252912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.772 [2024-10-13 20:07:10.252949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.772 qpair failed and we were unable to recover it. 00:37:20.772 [2024-10-13 20:07:10.253150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.253187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 
00:37:20.773 [2024-10-13 20:07:10.253327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.253364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.253533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.253567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.253668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.253702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.253861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.253911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.254063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.254096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.254344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.254381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.254531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.254571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.254672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.254706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.254830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.254863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.254984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.255033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 
00:37:20.773 [2024-10-13 20:07:10.255145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.255182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.255361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.255408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.255545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.255578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.255710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.255743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.255844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.255878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.255977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.256010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.256130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.256167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.256363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.256409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.256560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.256609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.256780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.256836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 
00:37:20.773 [2024-10-13 20:07:10.256988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.257024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.257132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.257168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.257372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.257432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.257578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.257614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.257728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.257762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.257871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.257904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.258033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.258067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.258203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.258236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.258421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.258454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.258551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.258585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 
00:37:20.773 [2024-10-13 20:07:10.258714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.258747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.258901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.258938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.259094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.259127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.259263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.259297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.259495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.259544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.259685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.259722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.259835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.259871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.260010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.260045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.260172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.773 [2024-10-13 20:07:10.260207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.773 qpair failed and we were unable to recover it. 00:37:20.773 [2024-10-13 20:07:10.260313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.260348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 
00:37:20.774 [2024-10-13 20:07:10.260464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.260500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.260633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.260666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.260818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.260855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.260963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.261000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.261150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.261183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.261301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.261350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.261487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.261529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.261663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.261698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.261824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.261875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.262143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.262203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 
00:37:20.774 [2024-10-13 20:07:10.262328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.262362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.262505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.262540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.262676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.262728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.262884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.262917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.263016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.263050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.263185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.263228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.263392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.263436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.263549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.263584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.263830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.263891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.264045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.264079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 
00:37:20.774 [2024-10-13 20:07:10.264219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.264274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.264476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.264511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.264638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.264672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.264858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.264938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.265193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.265252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.265408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.265442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.265576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.265609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.265721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.265757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.265914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.265948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.266111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.266166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 
00:37:20.774 [2024-10-13 20:07:10.266320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.266357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.266542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.266577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.266685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.266718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.266826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.266859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.266967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.267001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.267149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.267185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.267322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.267373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.267505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.267540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.267676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.267710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.774 qpair failed and we were unable to recover it. 00:37:20.774 [2024-10-13 20:07:10.267860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.774 [2024-10-13 20:07:10.267909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 
00:37:20.775 [2024-10-13 20:07:10.268094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.268131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.268284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.268323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.268476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.268510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.268664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.268697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.268910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.268973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.269173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.269232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.269406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.269445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.269552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.269585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.269740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.269773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.269904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.269937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 
00:37:20.775 [2024-10-13 20:07:10.270066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.270117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.270303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.270337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.270474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.270507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.270664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.270717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.270884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.270921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.271035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.271068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.271209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.271244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.271409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.271462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.271599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.271633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.271792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.271825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 
00:37:20.775 [2024-10-13 20:07:10.271960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.272004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.272107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.272141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.272262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.272317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.272520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.272555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.272658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.272691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.272851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.272884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.273068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.273105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.273335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.273371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.273502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.273535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.273668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.273702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 
00:37:20.775 [2024-10-13 20:07:10.273831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.273866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.273995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.274045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.274212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.775 [2024-10-13 20:07:10.274246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.775 qpair failed and we were unable to recover it. 00:37:20.775 [2024-10-13 20:07:10.274362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.274403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.274509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.274542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.274691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.274741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.274926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.274963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.275135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.275184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.275325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.275359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.275507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.275541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 
00:37:20.776 [2024-10-13 20:07:10.275678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.275712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.275870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.275920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.276100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.276133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.276275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.276313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.276472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.276512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.276650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.276685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.276859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.276902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.277055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.277095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.277242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.277276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.277416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.277451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 
00:37:20.776 [2024-10-13 20:07:10.277550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.277584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.277691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.277724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.277866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.277917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.278064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.278101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.278242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.278275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.278433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.278467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.278572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.278608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.278748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.278781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.278875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.278926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.279096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.279133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 
00:37:20.776 [2024-10-13 20:07:10.279311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.279349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.279513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.279563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.279679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.279715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.279852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.279886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.280056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.280152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.280296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.280333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.280484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.280518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.280622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.280655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.280771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.280805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 00:37:20.776 [2024-10-13 20:07:10.280947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.776 [2024-10-13 20:07:10.280981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.776 qpair failed and we were unable to recover it. 
00:37:20.781 [2024-10-13 20:07:10.281089 .. 20:07:10.317605] the same three-line record (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2f00 / 0x61500021ff00 / 0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every remaining connection attempt in this window.
00:37:20.782 [2024-10-13 20:07:10.317735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.317800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.317958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.317999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.318134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.318168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.318294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.318346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.318521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.318572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.318729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.318765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.318901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.318937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.319113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.319171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.319340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.319384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.319529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.319575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 
00:37:20.782 [2024-10-13 20:07:10.319723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.319758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.319911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.319945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.320123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.320165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.320317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.320357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.320501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.320546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.320660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.320704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.320871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.320909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.321020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.321054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.321190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.321241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.321386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.321447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 
00:37:20.782 [2024-10-13 20:07:10.321601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.321637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.321772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.321812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.322003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.322078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.322246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.322282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.322433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.322468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.322584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.322617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.322753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.322787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.322913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.322947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.323088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.323135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.323295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.323330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 
00:37:20.782 [2024-10-13 20:07:10.323507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.323561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.323737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.323777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.323946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.323981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.324086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.324121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.324223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.324258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.324388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.324430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.324572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.324608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.324749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.324784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.324909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.324944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 00:37:20.782 [2024-10-13 20:07:10.325041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.782 [2024-10-13 20:07:10.325075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.782 qpair failed and we were unable to recover it. 
00:37:20.782 [2024-10-13 20:07:10.325204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.325241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.325343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.325383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.325534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.325569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.325707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.325746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.325907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.325952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.326076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.326112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.326249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.326282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.326437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.326470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.326577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.326611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.326748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.326782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 
00:37:20.783 [2024-10-13 20:07:10.326887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.326921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.327039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.327075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.327201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.327236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.327359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.327408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.327542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.327574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.327679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.327719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.327889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.327927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.328076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.328113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.328292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.328331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.328505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.328541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 
00:37:20.783 [2024-10-13 20:07:10.328667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.328728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.328875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.328914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.329076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.329111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.329248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.329281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.329462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.329499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.329606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.329640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.329781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.329816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.330026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.330060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.330219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.330253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.330446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.330496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 
00:37:20.783 [2024-10-13 20:07:10.330619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.330653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.330805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.330839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.330950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.331004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.331149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.331196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.331339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.331374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.331505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.783 [2024-10-13 20:07:10.331540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.783 qpair failed and we were unable to recover it. 00:37:20.783 [2024-10-13 20:07:10.331644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.331687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.331819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.331852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.331958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.332009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.332118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.332166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 
00:37:20.784 [2024-10-13 20:07:10.332319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.332352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.332507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.332556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.332717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.332753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.332879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.332927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.333136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.333192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.333326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.333362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.333503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.333538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.333650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.333713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.333867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.333906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.334016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.334053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 
00:37:20.784 [2024-10-13 20:07:10.334195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.334232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.334373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.334423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.334583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.334616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.334757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.334798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.334953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.335010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.335111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.335145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.335290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.335324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.335484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.335534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.335667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.335720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.335860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.335895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 
00:37:20.784 [2024-10-13 20:07:10.336015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.336050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.336158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.336191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.336323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.336357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.336510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.336546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.336647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.336693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.336820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.336853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.336988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.337022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.337157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.337194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.337335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.337372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.337499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.337550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 
00:37:20.784 [2024-10-13 20:07:10.337704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.337759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.337922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.337964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.338118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.338156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.338274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.338307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.338428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.784 [2024-10-13 20:07:10.338462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.784 qpair failed and we were unable to recover it. 00:37:20.784 [2024-10-13 20:07:10.338565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.338598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.338757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.338796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.338975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.339013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.339169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.339207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.339327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.339383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 
00:37:20.785 [2024-10-13 20:07:10.339511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.339545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.339660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.339707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.339900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.339938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.340083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.340125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.340275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.340313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.340446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.340480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.340584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.340618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.340781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.340814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.341031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.341068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.341178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.341215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 
00:37:20.785 [2024-10-13 20:07:10.341368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.341415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.341556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.341590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.341729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.341798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.341960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.342022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.342163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.342218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.342362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.342417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.342549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.342584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.342707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.342745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.342910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.342949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.343134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.343188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 
00:37:20.785 [2024-10-13 20:07:10.343366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.343422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.343572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.343621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.343809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.343848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.344049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.344087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.344232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.344270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.344390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.344455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.344560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.344595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.344703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.344737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.344935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.344974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.345158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.345197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 
00:37:20.785 [2024-10-13 20:07:10.345341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.345415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.345546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.345585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.345724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.345759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.345933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.345970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.346102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.785 [2024-10-13 20:07:10.346170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.785 qpair failed and we were unable to recover it. 00:37:20.785 [2024-10-13 20:07:10.346333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.346366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.346508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.346566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.346702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.346750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.346907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.346967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.347090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.347128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 
00:37:20.786 [2024-10-13 20:07:10.347295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.347333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.347470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.347503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.347617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.347652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.347812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.347856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.348074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.348113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.348262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.348300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.348485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.348535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.348666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.348713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.348864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.348899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.349022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.349057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 
00:37:20.786 [2024-10-13 20:07:10.349234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.349267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.349367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.349419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.349547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.349581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.349753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.349786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.349937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.349976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.350125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.350162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.350275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.350325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.350491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.350541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.350661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.350708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.350845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.350878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 
00:37:20.786 [2024-10-13 20:07:10.350997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.351030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.351132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.351167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.351331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.351363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.351484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.351519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.351646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.351707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.351884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.351920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.352036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.352089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.352241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.352279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.352444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.352479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.352612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.352645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 
00:37:20.786 [2024-10-13 20:07:10.352879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.352929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.353096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.353129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.353288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.353320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.353439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.353475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.353575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.353611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.786 qpair failed and we were unable to recover it. 00:37:20.786 [2024-10-13 20:07:10.353808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.786 [2024-10-13 20:07:10.353845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.353999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.354054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.354234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.354268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.354388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.354441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.354570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.354603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 
00:37:20.787 [2024-10-13 20:07:10.354721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.354773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.354884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.354916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.355069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.355107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.355294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.355339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.355492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.355528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.355686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.355755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.355943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.355981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.356163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.356201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.356344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.356391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.356554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.356589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 
00:37:20.787 [2024-10-13 20:07:10.356692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.356752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.356892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.356929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.357055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.357090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.357194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.357227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.357389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.357448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.357578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.357611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.357745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.357778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.357918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.357952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.358101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.358135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.358297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.358348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 
00:37:20.787 [2024-10-13 20:07:10.358496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.358531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.358659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.358703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.358835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.358897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.359019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.359059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.359214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.359247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.359428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.359488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.359617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.359651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.359768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.359802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.359939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.359997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.360139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.360176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 
00:37:20.787 [2024-10-13 20:07:10.360341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.360382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.360505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.360538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.360668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.360712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.360834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.360883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.361077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.361130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.787 [2024-10-13 20:07:10.361281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.787 [2024-10-13 20:07:10.361335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.787 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.361478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.361513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.361659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.361708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.361844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.361881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.362043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.362083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 
00:37:20.788 [2024-10-13 20:07:10.362224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.362262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.362405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.362474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.362638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.362675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.362843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.362888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.363035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.363072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.363216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.363254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.363409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.363443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.363545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.363583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.363739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.363794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.363956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.363997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 
00:37:20.788 [2024-10-13 20:07:10.364176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.364236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.364388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.364450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.364605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.364640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.364784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.364818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.364990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.365070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.365237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.365276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.365458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.365493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.365650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.365684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.365845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.365894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.366045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.366087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 
00:37:20.788 [2024-10-13 20:07:10.366236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.366274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.366423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.366456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.366596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.366630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.366851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.366896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.367092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.367129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.367248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.367288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.367402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.367456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.367564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.367599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.367765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.367800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.367976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.368032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 
00:37:20.788 [2024-10-13 20:07:10.368163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.368217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.788 qpair failed and we were unable to recover it. 00:37:20.788 [2024-10-13 20:07:10.368364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.788 [2024-10-13 20:07:10.368417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.368540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.368575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.368742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.368792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.368961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.369017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.369181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.369238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.369401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.369436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.369589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.369639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.369812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.369860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.370066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.370105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 
00:37:20.789 [2024-10-13 20:07:10.370220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.370259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.370370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.370417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.370584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.370621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.370778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.370820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.370983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.371021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.371184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.371221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.371334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.371372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.371581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.371630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.371763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.371799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.371989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.372058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 
00:37:20.789 [2024-10-13 20:07:10.372251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.372313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.372502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.372537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.372699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.372762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.372877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.372913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.373055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.373111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.373295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.373327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.373479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.373513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.373702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.373759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.373945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.374046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.374206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.374259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 
00:37:20.789 [2024-10-13 20:07:10.374376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.374421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.374534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.374568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.374719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.374769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.374899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.374937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.375123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.375182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.375316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.375354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.375486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.375522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.375691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.375728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.375926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.375979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.376110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.376169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 
00:37:20.789 [2024-10-13 20:07:10.376311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.376351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.789 [2024-10-13 20:07:10.376482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.789 [2024-10-13 20:07:10.376531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.789 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.376691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.376760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.376999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.377061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.377220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.377259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.377366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.377417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.377602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.377638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.377784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.377860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.378054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.378168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.378364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.378412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 
00:37:20.790 [2024-10-13 20:07:10.378541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.378575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.378722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.378757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.378860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.378894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.379024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.379064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.379215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.379252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.379392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.379456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.379565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.379601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.379778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.379841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.379967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.380021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.380218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.380272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 
00:37:20.790 [2024-10-13 20:07:10.380387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.380445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.380580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.380634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.380769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.380808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.380997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.381055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.381283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.381320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.381464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.381501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.381662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.381722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.381846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.381887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.382014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.382049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.382177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.382212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 
00:37:20.790 [2024-10-13 20:07:10.382338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.382386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.382520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.382555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.382665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.382699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.382827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.382867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.383026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.383060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.383194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.383229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.383354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.383404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.383522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.383569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.383677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.383711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.383839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.383883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 
00:37:20.790 [2024-10-13 20:07:10.384016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.384050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.384180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.790 [2024-10-13 20:07:10.384214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.790 qpair failed and we were unable to recover it. 00:37:20.790 [2024-10-13 20:07:10.384331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.384381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.384544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.384579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.384700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.384749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.384891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.384935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.385073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.385107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.385236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.385270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.385416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.385453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.385569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.385607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 
00:37:20.791 [2024-10-13 20:07:10.385764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.385799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.386027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.386092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.386297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.386341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.386483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.386524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.386659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.386695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.386853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.386892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.387144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.387200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.387314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.387352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.387516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.387565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.387672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.387709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 
00:37:20.791 [2024-10-13 20:07:10.387909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.388000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.388169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.388249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.388392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.388453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.388588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.388622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.388770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.388807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.389031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.389089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.389265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.389302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.389495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.389530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.389673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.389707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.389823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.389864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 
00:37:20.791 [2024-10-13 20:07:10.390016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.390053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.390184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.390218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.390401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.390436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.390559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.390593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.390769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.390825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.391016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.391058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.391239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.391278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.391445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.391479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.391614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.391649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.391841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.391879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 
00:37:20.791 [2024-10-13 20:07:10.392067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.392114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.392285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.791 [2024-10-13 20:07:10.392323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.791 qpair failed and we were unable to recover it. 00:37:20.791 [2024-10-13 20:07:10.392501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.392551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.392683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.392739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.392892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.392927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.393117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.393184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.393359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.393404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.393547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.393581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.393782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.393856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.394136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.394194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 
00:37:20.792 [2024-10-13 20:07:10.394352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.394386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.394496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.394529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.394687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.394725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.394871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.394910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.395097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.395182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.395331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.395370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.395516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.395552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.395665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.395726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.395878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.395916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.396036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.396087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 
00:37:20.792 [2024-10-13 20:07:10.396234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.396273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.396439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.396473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.396606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.396641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.396799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.396844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.397012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.397065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.397291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.397329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.397525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.397567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.397677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.397711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.397879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.397917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.398168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.398227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 
00:37:20.792 [2024-10-13 20:07:10.398377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.398424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.398566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.398614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.398833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.398890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.399153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.399238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.399409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.399445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.399614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.399648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.399808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.399862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.792 qpair failed and we were unable to recover it. 00:37:20.792 [2024-10-13 20:07:10.400094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.792 [2024-10-13 20:07:10.400154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.400300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.400344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.400476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.400513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 
00:37:20.793 [2024-10-13 20:07:10.400640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.400698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.400845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.400882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.400981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.401027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.401157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.401195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.401342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.401409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.401601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.401649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.401940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.401996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.402130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.402193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.402320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.402353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.402495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.402530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 
00:37:20.793 [2024-10-13 20:07:10.402652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.402707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.402886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.402949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.403210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.403274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.403414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.403455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.403590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.403625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.403758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.403797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.403962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.404007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.404118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.404152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.404284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.404318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.404467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.404517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 
00:37:20.793 [2024-10-13 20:07:10.404712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.404757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.404963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.405060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.405269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.405329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.405521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.405556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.405699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.405741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.405922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.405993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.406105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.406142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.406281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.406314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.406468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.406517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.406654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.406724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 
00:37:20.793 [2024-10-13 20:07:10.406906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.406954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.793 qpair failed and we were unable to recover it. 00:37:20.793 [2024-10-13 20:07:10.407113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.793 [2024-10-13 20:07:10.407151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.407270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.407318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.407452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.407488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.407623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.407659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.407800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.407838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.408008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.408048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.408221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.408258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.408408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.408460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.408626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.408661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 
00:37:20.794 [2024-10-13 20:07:10.408850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.408932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.409188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.409254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.409383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.409432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.409565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.409599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.409752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.409807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.410075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.410136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.410289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.410328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.410505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.410541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.410670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.410704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.410808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.410864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 
00:37:20.794 [2024-10-13 20:07:10.411040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.411077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.411255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.411302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.411427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.411480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.411602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.411641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.411770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.411804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.411935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.411974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.412148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.412202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.412384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.412428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.412535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.412569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.412667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.412700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 
00:37:20.794 [2024-10-13 20:07:10.412824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.412858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.413031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.413069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.413211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.413248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.413388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.413452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.413582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.413615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.413746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.413780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.413898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.413936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.414109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.414146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.414288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.414325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.414442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.414495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 
00:37:20.794 [2024-10-13 20:07:10.414628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.414663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.794 [2024-10-13 20:07:10.414791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.794 [2024-10-13 20:07:10.414825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.794 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.414982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.415016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.415188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.415222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.415390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.415431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.415546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.415592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.415777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.415818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.415949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.416008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.416173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.416213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.416363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.416407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 
00:37:20.795 [2024-10-13 20:07:10.416563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.416611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.416779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.416815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.416966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.417003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.417199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.417257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.417383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.417423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.417555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.417588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.417760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.417798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.417900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.417937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.418099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.418152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.418294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.418331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 
00:37:20.795 [2024-10-13 20:07:10.418494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.418529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.418636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.418669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.418868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.418902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.419127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.419170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.419318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.419355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.419543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.419592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.419763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.419801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.419985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.420045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.420191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.420230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.420380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.420422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 
00:37:20.795 [2024-10-13 20:07:10.420540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.420589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.420697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.420732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.420881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.420919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.421145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.421182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.421337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.421374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.421531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.421565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.421698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.421732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.422020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.422079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.422246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.422309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.422469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.422503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 
00:37:20.795 [2024-10-13 20:07:10.422613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.422646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.795 [2024-10-13 20:07:10.422754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.795 [2024-10-13 20:07:10.422788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.795 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.422944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.422977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.423113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.423152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.423311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.423345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.423479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.423513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.423621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.423654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.423774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.423823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.423941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.423978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.424157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.424211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 
00:37:20.796 [2024-10-13 20:07:10.424313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.424366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.424497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.424550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.424727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.424782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.424936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.424977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.425119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.425158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.425312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.425349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.425490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.425525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.425681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.425730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.425867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.425902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.426060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.426113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 
00:37:20.796 [2024-10-13 20:07:10.426225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.426262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.426376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.426422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.426575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.426609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.426717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.426755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.426870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.426904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.427082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.427120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.427286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.427323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.427506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.427555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.427721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.427763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.427906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.427944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 
00:37:20.796 [2024-10-13 20:07:10.428086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.428151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.428321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.428359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.428491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.428525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.428671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.428706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.428829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.428882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.429136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.429193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.429332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.429367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.429489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.429524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.429667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.429706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.429905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.429942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 
00:37:20.796 [2024-10-13 20:07:10.430133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.430190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.430339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.796 [2024-10-13 20:07:10.430375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.796 qpair failed and we were unable to recover it. 00:37:20.796 [2024-10-13 20:07:10.430537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.430571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.430685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.430738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.430849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.430886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.431088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.431125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.431234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.431272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.431422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.431473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.431568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.431601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.431710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.431743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 
00:37:20.797 [2024-10-13 20:07:10.431871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.431908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.432016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.432050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.432199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.432236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.432378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.432422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.432580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.432614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.432798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.432835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.432980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.433017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.433204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.433241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.433385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.433448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.433582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.433616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 
00:37:20.797 [2024-10-13 20:07:10.433745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.433779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.433884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.433935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.434179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.434217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.434372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.434423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.434587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.434622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.434763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.434796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.435016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.435054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.435189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.435225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.435361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.435405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.435539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.435572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 
00:37:20.797 [2024-10-13 20:07:10.435701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.435750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.435902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.435951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.436179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.436216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.436332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.436371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.436507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.436541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.436670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.436704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.436868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.797 [2024-10-13 20:07:10.436951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.797 qpair failed and we were unable to recover it. 00:37:20.797 [2024-10-13 20:07:10.437101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.437139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.437315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.437367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.437522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.437571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 
00:37:20.798 [2024-10-13 20:07:10.437689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.437725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.437858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.437895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.438043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.438080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.438202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.438239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.438392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.438436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.438536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.438571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.438706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.438740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.438905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.438939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.439087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.439124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.439289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.439326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 
00:37:20.798 [2024-10-13 20:07:10.439480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.439519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.439644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.439677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.439836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.439870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.439977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.440010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.440117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.440153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.440319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.440356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.440498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.440533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.440688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.440722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.440831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.440865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 00:37:20.798 [2024-10-13 20:07:10.440997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.798 [2024-10-13 20:07:10.441030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.798 qpair failed and we were unable to recover it. 
00:37:20.798 [2024-10-13 20:07:10.441133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:20.798 [2024-10-13 20:07:10.441192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 
00:37:20.798 qpair failed and we were unable to recover it. 
00:37:20.798-00:37:20.803 [the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 2024-10-13 20:07:10.441 through 20:07:10.477 for tqpair=0x6150001ffe80, tqpair=0x6150001f2f00 and tqpair=0x615000210000, all targeting addr=10.0.0.2, port=4420]
00:37:20.803 [2024-10-13 20:07:10.477861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.803 [2024-10-13 20:07:10.477895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.803 qpair failed and we were unable to recover it. 00:37:20.803 [2024-10-13 20:07:10.477999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.803 [2024-10-13 20:07:10.478050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.803 qpair failed and we were unable to recover it. 00:37:20.803 [2024-10-13 20:07:10.478219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.803 [2024-10-13 20:07:10.478257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.803 qpair failed and we were unable to recover it. 00:37:20.803 [2024-10-13 20:07:10.478404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.803 [2024-10-13 20:07:10.478458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.803 qpair failed and we were unable to recover it. 00:37:20.803 [2024-10-13 20:07:10.478592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.478625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.478772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.478824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.478945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.478983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.479150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.479189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.479365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.479410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.479502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.479535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 
00:37:20.804 [2024-10-13 20:07:10.479661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.479712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.479911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.480010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.480157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.480195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.480338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.480377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.480560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.480609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.480734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.480804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.480926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.480967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.481115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.481153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.481292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.481330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.481441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.481490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 
00:37:20.804 [2024-10-13 20:07:10.481597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.481636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.481858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.481918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.482035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.482072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.482213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.482250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.482409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.482453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.482594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.482628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.482779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.482816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.482991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.483028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.483131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.483168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.483267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.483304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 
00:37:20.804 [2024-10-13 20:07:10.483469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.483503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.483602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.483636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.483793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.483831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.483966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.484151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.484329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.484495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.484623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.484762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.484948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.484985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 
00:37:20.804 [2024-10-13 20:07:10.485154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.485200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.485375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.485418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.485539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.485572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.485697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.485729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.804 qpair failed and we were unable to recover it. 00:37:20.804 [2024-10-13 20:07:10.485877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.804 [2024-10-13 20:07:10.485914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.486094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.486132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.486281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.486319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.486472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.486522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.486671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.486708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.486885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.486923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 
00:37:20.805 [2024-10-13 20:07:10.487188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.487250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.487500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.487534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.487641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.487675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.487777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.487811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.488006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.488068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.488207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.488244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.488391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.488476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.488623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.488658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.488815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.488851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.488998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.489037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 
00:37:20.805 [2024-10-13 20:07:10.489234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.489303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.489501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.489537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.489668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.489702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.489870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.489922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.490069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.490107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.490254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.490294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.490439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.490473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.490604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.490639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.490778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.490813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.490977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.491035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 
00:37:20.805 [2024-10-13 20:07:10.491177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.491216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.491379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.491425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.491603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.491638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.491816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.491854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.492078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.492144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.492346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.492384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.492545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.492579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.492735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.492773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.492901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.492953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 00:37:20.805 [2024-10-13 20:07:10.493069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.805 [2024-10-13 20:07:10.493107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.805 qpair failed and we were unable to recover it. 
00:37:20.805 [2024-10-13 20:07:10.493208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.493244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.493371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.493410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.493547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.493581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.493698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.493736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.493879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.493917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.494026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.494064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.494173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.494212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.494404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.494455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.494581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.494619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.494757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.494792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 
00:37:20.806 [2024-10-13 20:07:10.494919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.494972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.495150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.495203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.495327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.495359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.495491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.495526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.495627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.495660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.495760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.495793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.495892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.495942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.496088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.496125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.496242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.496279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.496378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.496447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 
00:37:20.806 [2024-10-13 20:07:10.496580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.496619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.496717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.496749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.496849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.496883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.496998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.497032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.497199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.497236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.497358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.497403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.497527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.497560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.497687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.497721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.497836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.497871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.498026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.498060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 
00:37:20.806 [2024-10-13 20:07:10.498253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.498297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.498462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.498497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.498602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.498633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.498788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.498822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.498977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.499015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.499159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.499196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.499342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.499375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.499489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.499523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.499631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.499662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 00:37:20.806 [2024-10-13 20:07:10.499828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.499866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.806 qpair failed and we were unable to recover it. 
00:37:20.806 [2024-10-13 20:07:10.500033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.806 [2024-10-13 20:07:10.500070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.500193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.500231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.500380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.500422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.500559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.500591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.500696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.500729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.500860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.500894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.501028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.501080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.501196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.501232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.501386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.501429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.501528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.501561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 
00:37:20.807 [2024-10-13 20:07:10.501671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.501703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.501832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.501864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.501967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.502162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.502331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.502485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.502658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.502790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.502914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.502945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.503090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.503125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 
00:37:20.807 [2024-10-13 20:07:10.503344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.503385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.503559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.503593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.503725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.503760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.503861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.503894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.504028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.504081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.504238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.504275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.504433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.504466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.504569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.504601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.504711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.504744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.504908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.504980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 
00:37:20.807 [2024-10-13 20:07:10.505170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.505225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.505346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.505380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.505561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.505606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.505760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.505812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.505965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.506156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.506327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.506473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.506640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.807 [2024-10-13 20:07:10.506783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 
00:37:20.807 [2024-10-13 20:07:10.506945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.807 [2024-10-13 20:07:10.506979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.807 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.507125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.507162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.507307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.507343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.507500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.507533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.507661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.507694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.507873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.507911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.508041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.508078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.508237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.508322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.508487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.508527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.508655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.508708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 
00:37:20.808 [2024-10-13 20:07:10.508861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.508913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.509045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.509096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.509230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.509268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.509376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.509420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.509520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.509553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.509686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.509720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.509849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.509884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.510013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.510146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.510321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 
00:37:20.808 [2024-10-13 20:07:10.510466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.510610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.510806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.510958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.510991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.511121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.511153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.511265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.511299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.511413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.511446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.511560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.511592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.511751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.511789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.511896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.511932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 
00:37:20.808 [2024-10-13 20:07:10.512097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.512133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.512307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.512345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.512490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.512524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.512677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.512732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.512890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.512942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.513118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.513170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.513310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.513345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.513485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.513525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.513642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.513680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.808 [2024-10-13 20:07:10.513822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.513859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 
00:37:20.808 [2024-10-13 20:07:10.514009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.808 [2024-10-13 20:07:10.514047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.808 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.514182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.514233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.514364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.514405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.514540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.514581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.514747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.514801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.514983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.515034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.515164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.515199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.515303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.515336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.515462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.515516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.515652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.515686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 
00:37:20.809 [2024-10-13 20:07:10.515843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.515885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.516021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.516052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.516183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.516219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.516331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.516363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.516498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.516569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.516724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.516773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.516924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.516963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.517090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.517140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.517288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.517324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.517470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.517506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 
00:37:20.809 [2024-10-13 20:07:10.517671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.517716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.517838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.517876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.518016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.518053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.518174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.518212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.518366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.518412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.518529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.518561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.518711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.518765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.518924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.518978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.519124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.519177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.519331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.519365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 
00:37:20.809 [2024-10-13 20:07:10.519524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.519574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.519753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.519793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.519904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.519941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.520143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.520221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.520372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.520443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.520553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.520586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.520746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.520783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.520902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.809 [2024-10-13 20:07:10.520938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.809 qpair failed and we were unable to recover it. 00:37:20.809 [2024-10-13 20:07:10.521084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.521122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.521291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.521353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 
00:37:20.810 [2024-10-13 20:07:10.521484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.521519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.521666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.521720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.521965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.522025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.522181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.522249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.522387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.522433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.522594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.522633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.522777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.522816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.523015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.523088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.523251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.523289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.523449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.523484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 
00:37:20.810 [2024-10-13 20:07:10.523613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.523647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.523867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.523904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.524025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.524061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.524205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.524241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.524358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.524390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.524513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.524549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.524720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.524775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.524930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.524973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.525096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.525173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.525331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.525370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 
00:37:20.810 [2024-10-13 20:07:10.525506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.525545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.525650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.525684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.525810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.525847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.525974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.526026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.526136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.526171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.526313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.526354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.526490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.526526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.526658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.526693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.526878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.526917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.527103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.527201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 
00:37:20.810 [2024-10-13 20:07:10.527371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.527416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.527561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.527595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.527728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.527767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.527924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.527961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.528110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.528147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.810 [2024-10-13 20:07:10.528288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.810 [2024-10-13 20:07:10.528325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.810 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.528458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.528493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.528649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.528683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.528836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.528875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.529021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.529061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 
00:37:20.811 [2024-10-13 20:07:10.529219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.529258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.529427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.529476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.529595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.529632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.529824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.529880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.530003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.530056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.530205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.530257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.530408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.530444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.530582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.530621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.530729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.530762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.530883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.530919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 
00:37:20.811 [2024-10-13 20:07:10.531114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.531152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.531293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.531332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.531515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.531565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.531729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.531786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.531975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.532029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.532162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.532215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.532385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.532451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.532635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.532687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.532819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.532854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.532997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.533032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 
00:37:20.811 [2024-10-13 20:07:10.533150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.533185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.533301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.533335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.533501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.533551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.533688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.533725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.533837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.533872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.533995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.534030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.534164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.534199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.534334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.534368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.534512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.534567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.534755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.534808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 
00:37:20.811 [2024-10-13 20:07:10.534950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.534986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.535163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.535216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.535349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.535384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.535588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.535642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.535848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.535903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.536068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.811 [2024-10-13 20:07:10.536129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.811 qpair failed and we were unable to recover it. 00:37:20.811 [2024-10-13 20:07:10.536317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.536382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.536524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.536579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.536700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.536740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.536876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.536914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 
00:37:20.812 [2024-10-13 20:07:10.537085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.537123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.537305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.537343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.537520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.537569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.537764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.537804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.537954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.538011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.538213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.538276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.538406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.538451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.538594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.538645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.538773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.538826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.538990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.539044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 
00:37:20.812 [2024-10-13 20:07:10.539283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.539338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.539531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.539582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.539741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.539807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.540043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.540079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.540264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.540324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.540506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.540541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.540661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.540701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.540808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.540841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.541038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.541077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.541259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.541308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 
00:37:20.812 [2024-10-13 20:07:10.541446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.541507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.541646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.541697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.541846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.541891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.542100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.542164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.542318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.542355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.542514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.542576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.542765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.542818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.543094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.543155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.543272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.543308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.543434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.543475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 
00:37:20.812 [2024-10-13 20:07:10.543582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.543617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.543709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.543743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.543895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.543932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.544060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.544097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.544258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.544297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.544459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.812 [2024-10-13 20:07:10.544498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.812 qpair failed and we were unable to recover it. 00:37:20.812 [2024-10-13 20:07:10.544657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.544713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.544900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.544958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.545168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.545229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.545384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.545444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 
00:37:20.813 [2024-10-13 20:07:10.545564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.545617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.545805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.545857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.546007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.546059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.546195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.546229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.546408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.546457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.546599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.546637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.546798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.546832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.547041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.547119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.547264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.547302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.547420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.547473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 
00:37:20.813 [2024-10-13 20:07:10.547579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.547612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.547750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.547802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.548066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.548126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.548255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.548294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.548475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.548532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.548683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.548732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.548852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.548887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.548980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.549013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.549131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.549163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.549284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.549320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 
00:37:20.813 [2024-10-13 20:07:10.549463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.549513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.549636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.549672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.549807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.549840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.549974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.550009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.550117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.550162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.550288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.550337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.550496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.550532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.550697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.550752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.550988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.551029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.551222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.551262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 
00:37:20.813 [2024-10-13 20:07:10.551389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.551440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.551575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.551610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.551786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.551849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.551997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.552062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.552258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.552327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.813 qpair failed and we were unable to recover it. 00:37:20.813 [2024-10-13 20:07:10.552523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.813 [2024-10-13 20:07:10.552558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.552684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.552733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.552911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.552975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.553133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.553192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.553335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.553373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 
00:37:20.814 [2024-10-13 20:07:10.553533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.553582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.553790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.553845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.554021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.554061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.554207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.554245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.554418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.554462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.554572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.554605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.554752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.554788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.555040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.555084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.555258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.555298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.555452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.555485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 
00:37:20.814 [2024-10-13 20:07:10.555617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.555652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.555758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.555790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.555896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.555928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.556062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.556097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.556208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.556245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.556416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.556469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.556635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.556694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.556872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.556928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.557096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.557150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.557327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.557362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 
00:37:20.814 [2024-10-13 20:07:10.557492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.557529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.557644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.557678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.557821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.557856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.558102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.558162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.558311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.558344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.558487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.558521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.558647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.814 [2024-10-13 20:07:10.558697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.814 qpair failed and we were unable to recover it. 00:37:20.814 [2024-10-13 20:07:10.558865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.558915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.559063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.559102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.559225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.559264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 
00:37:20.815 [2024-10-13 20:07:10.559418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.559473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.559573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.559606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.559734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.559771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.559921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.559960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.560115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.560154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.560315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.560354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.560510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.560560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.560727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.560764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.560940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.560995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:20.815 [2024-10-13 20:07:10.561157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.561209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 
00:37:20.815 [2024-10-13 20:07:10.561371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.815 [2024-10-13 20:07:10.561429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:20.815 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.561573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.561610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.561794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.561831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.562026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.562094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.562339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.562412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.562582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.562630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.562836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.562902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.563020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.563062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.563208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.563246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-10-13 20:07:10.563388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.563446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 
00:37:21.097 [2024-10-13 20:07:10.563568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.097 [2024-10-13 20:07:10.563608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.563790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.563845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.563964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.564002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.564166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.564219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.564349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.564406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.564557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.564606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.564750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.564790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.564964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.565010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.565184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.565242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.565380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.565440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-10-13 20:07:10.565571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.565620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.565805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.565859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.566005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.566045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.566190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.566227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.566382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.566431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.566576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.566627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.566851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.566893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.567153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.567212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.567359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.567404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-10-13 20:07:10.567564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.098 [2024-10-13 20:07:10.567600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.098 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-10-13 20:07:10.567695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.098 [2024-10-13 20:07:10.567729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:21.098 qpair failed and we were unable to recover it.
00:37:21.098 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats roughly 200 further times between 2024-10-13 20:07:10.567873 and 20:07:10.609788 (console timestamps 00:37:21.098 through 00:37:21.103), cycling through tqpair=0x61500021ff00, 0x6150001ffe80, 0x6150001f2f00 and 0x615000210000, always with addr=10.0.0.2, port=4420 ...]
00:37:21.103 [2024-10-13 20:07:10.609989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.103 [2024-10-13 20:07:10.610051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.103 qpair failed and we were unable to recover it. 00:37:21.103 [2024-10-13 20:07:10.610272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.103 [2024-10-13 20:07:10.610328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.103 qpair failed and we were unable to recover it. 00:37:21.103 [2024-10-13 20:07:10.610504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.103 [2024-10-13 20:07:10.610539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.103 qpair failed and we were unable to recover it. 00:37:21.103 [2024-10-13 20:07:10.610714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.103 [2024-10-13 20:07:10.610751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.103 qpair failed and we were unable to recover it. 00:37:21.103 [2024-10-13 20:07:10.610893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.103 [2024-10-13 20:07:10.610930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.103 qpair failed and we were unable to recover it. 00:37:21.103 [2024-10-13 20:07:10.611129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.611218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.611349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.611385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.611507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.611546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.611686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.611721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.611823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.611869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 
00:37:21.104 [2024-10-13 20:07:10.611996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.612031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.612198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.612232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.612369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.612412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.612576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.612610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.612725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.612758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.612966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.613033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.613290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.613349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.613517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.613551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.613691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.613729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.613867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.613904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 
00:37:21.104 [2024-10-13 20:07:10.614037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.614073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.614257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.614312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.614422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.614464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.614611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.614665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.614813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.614867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.615070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.615105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.615273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.615307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.615468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.615503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.615658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.615692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.615823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.615876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 
00:37:21.104 [2024-10-13 20:07:10.615987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.616024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.616164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.616201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.616307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.616357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.616508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.616544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.616715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.616765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.616902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.616951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.617088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.617124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.617253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.617288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.617390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.617431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.617570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.617606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 
00:37:21.104 [2024-10-13 20:07:10.617787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.617825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.617981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.618035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.618159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.618199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.618358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.618399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.618513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.104 [2024-10-13 20:07:10.618549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.104 qpair failed and we were unable to recover it. 00:37:21.104 [2024-10-13 20:07:10.618682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.618737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.618880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.618919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.619091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.619135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.619278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.619317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.619511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.619561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 
00:37:21.105 [2024-10-13 20:07:10.619735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.619771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.619952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.620007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.620132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.620184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.620307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.620341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.620465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.620500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.620645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.620699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.620931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.620974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.621183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.621252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.621408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.621460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.621556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.621589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 
00:37:21.105 [2024-10-13 20:07:10.621709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.621742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.621873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.621906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.622150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.622209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.622343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.622377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.622530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.622579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.622741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.622783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.622950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.623042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.623240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.623306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.623467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.623503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.623640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.623677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 
00:37:21.105 [2024-10-13 20:07:10.623907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.623945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.624152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.624216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.624324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.624374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.624537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.624587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.624783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.624824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.624930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.624968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.625179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.625236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.625400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.625434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.625559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.625609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.625844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.625906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 
00:37:21.105 [2024-10-13 20:07:10.626135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.626173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.626342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.626380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.626523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.626557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.626712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.626746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.626956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.627031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.105 qpair failed and we were unable to recover it. 00:37:21.105 [2024-10-13 20:07:10.627233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.105 [2024-10-13 20:07:10.627289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.627479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.627513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.627622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.627663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.627806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.627843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.628012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.628049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 
00:37:21.106 [2024-10-13 20:07:10.628152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.628189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.628322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.628359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.628556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.628591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.628707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.628744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.628891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.628928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.629072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.629126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.629246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.629283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.629470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.629505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.629609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.629643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.629816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.629854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 
00:37:21.106 [2024-10-13 20:07:10.630007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.630044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.630244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.630282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.630457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.630491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.630622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.630655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.630853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.630887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.631039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.631076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.631249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.631287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.631445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.631481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.631593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.631627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.631751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.631784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 
00:37:21.106 [2024-10-13 20:07:10.631975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.632014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.632186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.632224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.632357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.632402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.632595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.632644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.632787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.632832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.632969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.633005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.633149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.633184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.633281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.633315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.633498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.633548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 00:37:21.106 [2024-10-13 20:07:10.633686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.633720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.106 qpair failed and we were unable to recover it. 
00:37:21.106 [2024-10-13 20:07:10.633827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.106 [2024-10-13 20:07:10.633861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.633998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.634032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.634130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.634164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.634286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.634320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.634478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.634512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.634648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.634683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.634813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.634847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.634995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.635031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.635151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.635188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.635333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.635371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 
00:37:21.107 [2024-10-13 20:07:10.635543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.635580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.635748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.635803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.635968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.636009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.636160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.636198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.636318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.636352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.636517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.636566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.636766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.636806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.637055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.637113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.637266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.637303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.637489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.637524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 
00:37:21.107 [2024-10-13 20:07:10.637657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.637706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.637843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.637883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.638028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.638087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.638228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.638265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.638419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.638454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.638589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.638638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.638885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.638925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.639120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.639179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.639318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.639368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.639510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.639544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 
00:37:21.107 [2024-10-13 20:07:10.639679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.639712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.639920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.639977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.640172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.640233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.640470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.640505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.640621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.640661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.640855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.640893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.641086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.641124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.641263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.641301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.641426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.641461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.107 [2024-10-13 20:07:10.641591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.641626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 
00:37:21.107 [2024-10-13 20:07:10.641764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.107 [2024-10-13 20:07:10.641798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.107 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.641929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.641963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.642149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.642187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.642389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.642432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.642587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.642637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.642812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.642849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.642957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.643008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.643180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.643219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.643365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.643433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.643568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.643603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 
00:37:21.108 [2024-10-13 20:07:10.643722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.643760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.643958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.643995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.644141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.644178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.644322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.644360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.644540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.644589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.644743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.644780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.645025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.645086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.645258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.645296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.645451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.645487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.645596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.645631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 
00:37:21.108 [2024-10-13 20:07:10.645790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.645829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.646002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.646041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.646208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.646262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.646423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.646490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.646648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.646697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.646805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.646842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.647025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.647080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.647329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.647385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.647525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.647561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.647690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.647759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 
00:37:21.108 [2024-10-13 20:07:10.648026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.648085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.648223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.648276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.648414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.648467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.648601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.648636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.648790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.648829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.648929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.648978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.649150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.649187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.649323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.649356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.649518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.649552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.649707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.649744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 
00:37:21.108 [2024-10-13 20:07:10.649993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.108 [2024-10-13 20:07:10.650030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.108 qpair failed and we were unable to recover it. 00:37:21.108 [2024-10-13 20:07:10.650199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.650236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.650375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.650433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.650604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.650653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.650821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.650874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.651043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.651109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.651242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.651278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.651440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.651475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.651605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.651639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.651774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.651826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 
00:37:21.109 [2024-10-13 20:07:10.651974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.652011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.652135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.652190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.652369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.652416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.652567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.652617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.652761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.652798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.652965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.653018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.653169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.653206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.653359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.653400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.653533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.653567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.653722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.653755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 
00:37:21.109 [2024-10-13 20:07:10.653938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.653975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.654155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.654192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.654312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.654349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.654506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.654556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.654676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.654714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.654985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.655040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.655302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.655358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.655513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.655548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.655689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.655742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.655952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.656025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 
00:37:21.109 [2024-10-13 20:07:10.656267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.656327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.656483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.656518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.656673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.656707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.656859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.656897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.657105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.657167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.657336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.657370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.657539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.657589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.657764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.657801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.658052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.658091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.658255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.658294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 
00:37:21.109 [2024-10-13 20:07:10.658463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.109 [2024-10-13 20:07:10.658498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.109 qpair failed and we were unable to recover it. 00:37:21.109 [2024-10-13 20:07:10.658631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.658696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.658809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.658853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.659019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.659084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.659222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.659268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.659415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.659492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.659686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.659727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.659945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.659984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.660187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.660244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.660447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.660482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 
00:37:21.110 [2024-10-13 20:07:10.660589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.660624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.660758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.660810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.660991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.661029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.661155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.661208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.661322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.661374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.661504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.661553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.661722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.661771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.661932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.661988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.662141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.662193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.662312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.662347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 
00:37:21.110 [2024-10-13 20:07:10.662467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.662502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.662694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.662733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.662885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.662924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.663173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.663208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.663347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.663382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.663509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.663545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.663746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.663800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.663990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.664090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.664329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.664388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.664577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.664611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 
00:37:21.110 [2024-10-13 20:07:10.664786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.664833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.664993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.665053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.665315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.665376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.665553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.665602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.665817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.665878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.666115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.666179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.110 [2024-10-13 20:07:10.666304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.110 [2024-10-13 20:07:10.666342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.110 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.666512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.666548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.666696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.666751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.666978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.667035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 
00:37:21.111 [2024-10-13 20:07:10.667273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.667337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.667530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.667566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.667706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.667752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.667924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.667995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.668177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.668216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.668366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.668449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.668608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.668651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.668784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.668818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.668929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.668963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.669116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.669154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 
00:37:21.111 [2024-10-13 20:07:10.669292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.669330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.669524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.669574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.669726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.669776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.669939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.669995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.670175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.670228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.670379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.670452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.670555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.670590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.670753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.670792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.671008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.671042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.671199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.671237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 
00:37:21.111 [2024-10-13 20:07:10.671362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.671405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.671546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.671597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.671755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.671820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.672031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.672072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.672184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.672222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.672390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.672452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.672555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.672589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.672776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.672841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.673038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.673100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.673244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.673282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 
00:37:21.111 [2024-10-13 20:07:10.673450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.673487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.673641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.673717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.673923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.673963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.674158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.674232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.674444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.674486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.674620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.674666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.674818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.111 [2024-10-13 20:07:10.674857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.111 qpair failed and we were unable to recover it. 00:37:21.111 [2024-10-13 20:07:10.674982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.675025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.675204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.675241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.675401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.675435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 
00:37:21.112 [2024-10-13 20:07:10.675602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.675636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.675747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.675785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.675976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.676034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.676298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.676353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.676558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.676608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.676758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.676807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.677022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.677062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.677216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.677251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.677450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.677486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.677613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.677679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 
00:37:21.112 [2024-10-13 20:07:10.677824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.677878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.678196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.678259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.678378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.678441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.678566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.678600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.678757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.678807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.678915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.678951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.679197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.679257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.679385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.679433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.679593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.679627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.679728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.679764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 
00:37:21.112 [2024-10-13 20:07:10.679948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.680004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.680203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.680277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.680465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.680503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.680618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.680652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.680779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.680830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.680999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.681037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.681180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.681218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.681415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.681465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.681624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.681691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.681869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.681922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 
00:37:21.112 [2024-10-13 20:07:10.682101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.682160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.682291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.682333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.682527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.682580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.682773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.682838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.683064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.683108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.683287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.683325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.683471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.112 [2024-10-13 20:07:10.683511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.112 qpair failed and we were unable to recover it. 00:37:21.112 [2024-10-13 20:07:10.683649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.683702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.683906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.683976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.684164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.684224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 
00:37:21.113 [2024-10-13 20:07:10.684366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.684418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.684604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.684639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.684919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.684959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.685134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.685170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.685306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.685344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.685511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.685545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.685679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.685732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.685865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.685903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.686119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.686156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.686325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.686362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 
00:37:21.113 [2024-10-13 20:07:10.686552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.686601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.686768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.686818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.687011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.687063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.687229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.687286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.687432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.687468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.687595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.687648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.687799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.687851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.687963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.687998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.688127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.688167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.688332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.688368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 
00:37:21.113 [2024-10-13 20:07:10.688489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.688523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.688668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.688702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.688836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.688870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.689010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.689044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.689152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.689187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.689297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.689332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.689484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.689533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.689719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.689774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.689914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.689971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.690131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.690185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 
00:37:21.113 [2024-10-13 20:07:10.690349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.690387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.690548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.690597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.690809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.690850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.691012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.691051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.691197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.691243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.691389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.691434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.691545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.691580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.113 qpair failed and we were unable to recover it. 00:37:21.113 [2024-10-13 20:07:10.691770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.113 [2024-10-13 20:07:10.691808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.691947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.692002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.692151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.692217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 
00:37:21.114 [2024-10-13 20:07:10.692366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.692419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.692555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.692590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.692728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.692762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.692910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.692945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.693080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.693131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.693278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.693328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.693474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.693508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.693629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.693666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.693803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.693835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.693994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.694027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 
00:37:21.114 [2024-10-13 20:07:10.694154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.694189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.694341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.694379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.694575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.694623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.694783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.694821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.694953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.694994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.695138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.695177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.695321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.695361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.695528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.695584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.695753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.695793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.695963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.696000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 
00:37:21.114 [2024-10-13 20:07:10.696136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.696169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.696305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.696344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.696514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.696548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.696656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.696691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.696799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.696832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.696997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.697033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.697141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.697177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.697320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.697356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.697537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.697587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.697718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.697753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 
00:37:21.114 [2024-10-13 20:07:10.697929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.697985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.698134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.698188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.698326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.698363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.698515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.698549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.698724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.698828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.114 [2024-10-13 20:07:10.699083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.114 [2024-10-13 20:07:10.699142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.114 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.699289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.699328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.699492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.699525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.699681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.699730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.699925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.699980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 
00:37:21.115 [2024-10-13 20:07:10.700139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.700211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.700373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.700419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.700533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.700566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.700674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.700708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.700878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.700930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.701085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.701138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.701252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.701287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.701453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.701489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.701617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.701651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.701838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.701911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 
00:37:21.115 [2024-10-13 20:07:10.702144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.702203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.702376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.702441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.702592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.702628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.702738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.702775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.702922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.702959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.703220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.703276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.703386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.703445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.703544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.703576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.703823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.703896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.704157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.704213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 
00:37:21.115 [2024-10-13 20:07:10.704346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.704380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.704537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.704572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.704675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.704717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.704833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.704870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.705043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.705082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.705204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.705243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.705403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.705466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.705599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.705634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.705752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.705806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 00:37:21.115 [2024-10-13 20:07:10.705953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.115 [2024-10-13 20:07:10.705988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.115 qpair failed and we were unable to recover it. 
00:37:21.115 [2024-10-13 20:07:10.706125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.115 [2024-10-13 20:07:10.706162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:21.115 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from elapsed time 00:37:21.115 through 00:37:21.121 (log timestamps 2024-10-13 20:07:10.706 through 20:07:10.744), cycling over tqpair handles 0x6150001ffe80, 0x6150001f2f00, 0x615000210000 and 0x61500021ff00, all targeting 10.0.0.2:4420 ...]
00:37:21.121 [2024-10-13 20:07:10.744454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.744504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.744642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.744682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.744832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.744898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.745053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.745105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.745264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.745298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.745407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.745442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.745605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.745639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.745856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.745925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.746166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.746223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.746346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.746380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 
00:37:21.121 [2024-10-13 20:07:10.746623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.746657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.746790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.746826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.747005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.747044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.747222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.747260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.747414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.747467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.747574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.747608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.747740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.747773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.747878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.747910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.748080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.748149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.748292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.748330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 
00:37:21.121 [2024-10-13 20:07:10.748475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.748509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.748655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.748696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.748888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.748955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.749065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.749100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.749255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.749291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.749424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.749458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.749585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.749617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.749747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.749780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.749998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.750036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 00:37:21.121 [2024-10-13 20:07:10.750149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.121 [2024-10-13 20:07:10.750187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.121 qpair failed and we were unable to recover it. 
00:37:21.121 [2024-10-13 20:07:10.750345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.750383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.750509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.750557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.750738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.750794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.751029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.751087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.751214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.751268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.751376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.751424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.751554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.751586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.751786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.751854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.752065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.752123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.752255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.752289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 
00:37:21.122 [2024-10-13 20:07:10.752406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.752441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.752573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.752605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.752807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.752862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.753069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.753109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.753266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.753306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.753451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.753487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.753601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.753652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.753847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.753904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.754091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.754144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.754306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.754342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 
00:37:21.122 [2024-10-13 20:07:10.754462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.754495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.754637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.754671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.754838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.754877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.754993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.755026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.755154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.755188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.755293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.755328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.755464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.755511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.755677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.755730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.755996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.756052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.756281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.756341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 
00:37:21.122 [2024-10-13 20:07:10.756493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.756526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.756646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.756689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.756816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.756855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.756983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.757032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.757184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.757241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.757412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.757462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.757627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.757693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.757918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.757986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.758153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.758237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 00:37:21.122 [2024-10-13 20:07:10.758407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.758443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.122 qpair failed and we were unable to recover it. 
00:37:21.122 [2024-10-13 20:07:10.758573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.122 [2024-10-13 20:07:10.758606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.758786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.758851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.758976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.759009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.759113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.759145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.759283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.759316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.759466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.759515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.759672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.759722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.759841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.759879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.760038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.760074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.760240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.760301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 
00:37:21.123 [2024-10-13 20:07:10.760414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.760451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.760592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.760627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.760784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.760836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.761038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.761077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.761201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.761249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.761410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.761446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.761576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.761609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.761738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.761792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.761909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.761946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.762077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.762130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 
00:37:21.123 [2024-10-13 20:07:10.762269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.762313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.762487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.762535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.762676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.762714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.762844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.762877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.763031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.763070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.763267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.763305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.763470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.763508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.763691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.763731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.763913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.763948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.764196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.764261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 
00:37:21.123 [2024-10-13 20:07:10.764380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.764441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.764572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.764613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.764752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.764805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.764965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.765016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.765162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.765199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.765368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.765411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.123 qpair failed and we were unable to recover it. 00:37:21.123 [2024-10-13 20:07:10.765558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.123 [2024-10-13 20:07:10.765591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.765725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.765759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.765863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.765896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.766022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.766073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 
00:37:21.124 [2024-10-13 20:07:10.766215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.766253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.766378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.766459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.766593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.766643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.766813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.766853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.767038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.767078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.767227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.767266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.767460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.767496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.767654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.767689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.767823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.767859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.768049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.768088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 
00:37:21.124 [2024-10-13 20:07:10.768211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.768264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.768406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.768439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.768545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.768578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.768677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.768710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.768911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.768944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.769190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.769223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.769407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.769444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.769580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.769615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.769784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.769819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 00:37:21.124 [2024-10-13 20:07:10.769921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.124 [2024-10-13 20:07:10.769973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.124 qpair failed and we were unable to recover it. 
00:37:21.124 [2024-10-13 20:07:10.770155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.124 [2024-10-13 20:07:10.770216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:21.124 qpair failed and we were unable to recover it.
00:37:21.124 [... the same three-line error pattern repeats continuously through 2024-10-13 20:07:10.808430 (console time 00:37:21.129), cycling over tqpair=0x61500021ff00, 0x6150001ffe80 and 0x6150001f2f00, every attempt against addr=10.0.0.2, port=4420 and every connect() failing with errno = 111 ...]
00:37:21.129 [2024-10-13 20:07:10.808568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.129 [2024-10-13 20:07:10.808601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.129 qpair failed and we were unable to recover it. 00:37:21.129 [2024-10-13 20:07:10.808761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.129 [2024-10-13 20:07:10.808801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.129 qpair failed and we were unable to recover it. 00:37:21.129 [2024-10-13 20:07:10.808908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.808962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.809119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.809164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.809321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.809367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.809522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.809556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.809730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.809786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.809970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.810007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.810119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.810153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.810252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.810286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 
00:37:21.130 [2024-10-13 20:07:10.810389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.810432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.810567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.810601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.810819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.810875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.811009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.811041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.811202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.811254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.811438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.811491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.811589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.811621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.811749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.811781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.811943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.811995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.812138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.812172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 
00:37:21.130 [2024-10-13 20:07:10.812308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.812342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.812478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.812512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.812616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.812650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.812792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.812845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.812998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.813035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.813183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.813217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.813427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.813480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.813591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.813622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.813756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.813790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.813893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.813926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 
00:37:21.130 [2024-10-13 20:07:10.814038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.814070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.814195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.814227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.814347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.814430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.814564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.814601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.814778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.814816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.814979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.815044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.815233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.815293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.815444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.815478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.815599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.815634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.815793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.815831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 
00:37:21.130 [2024-10-13 20:07:10.815964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.816000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.816166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.816206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.130 qpair failed and we were unable to recover it. 00:37:21.130 [2024-10-13 20:07:10.816378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.130 [2024-10-13 20:07:10.816446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.816578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.816613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.816722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.816773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.816970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.817025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.817149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.817181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.817342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.817377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.817517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.817565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.817687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.817723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 
00:37:21.131 [2024-10-13 20:07:10.817850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.817883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.817985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.818017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.818176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.818211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.818359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.818398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.818539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.818589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.818705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.818740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.818878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.818913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.819161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.819217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.819379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.819428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.819564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.819599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 
00:37:21.131 [2024-10-13 20:07:10.819732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.819767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.819899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.819934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.820098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.820138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.820280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.820319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.820480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.820513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.820642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.820693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.820845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.820884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.821067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.821101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.821238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.821272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.821477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.821514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 
00:37:21.131 [2024-10-13 20:07:10.821624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.821657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.821784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.821816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.821942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.821978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.822103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.822144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.822279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.822313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.822481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.822515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.822647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.822685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.822821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.822855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.822955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.822990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.823122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.823156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 
00:37:21.131 [2024-10-13 20:07:10.823281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.823334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.823458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.823514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.131 [2024-10-13 20:07:10.823625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.131 [2024-10-13 20:07:10.823659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.131 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.823786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.823819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.824023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.824056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.824193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.824228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.824421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.824490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.824651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.824716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.824915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.824952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.825062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.825096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 
00:37:21.132 [2024-10-13 20:07:10.825223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.825258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.825355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.825390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.825513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.825548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.825705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.825744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.825887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.825922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.826062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.826107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.826270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.826305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.826451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.826483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.826581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.826614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.826768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.826806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 
00:37:21.132 [2024-10-13 20:07:10.826959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.826993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.827149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.827190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.827378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.827433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.827585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.827619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.827726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.827759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.827890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.827925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.828055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.828091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.828200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.828252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.828404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.828459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.828567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.828602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 
00:37:21.132 [2024-10-13 20:07:10.828742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.828777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.828892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.828945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.829082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.829117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.829247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.829302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.829443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.829510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.829615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.829651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.829811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.829864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.830008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.132 [2024-10-13 20:07:10.830045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.132 qpair failed and we were unable to recover it. 00:37:21.132 [2024-10-13 20:07:10.830174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.830207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.830306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.830339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 
00:37:21.133 [2024-10-13 20:07:10.830549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.830585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.830689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.830723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.830864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.830900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.831152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.831228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.831375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.831444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.831601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.831649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.831781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.831822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.831985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.832021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.832150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.832201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.832405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.832480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 
00:37:21.133 [2024-10-13 20:07:10.832659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.832696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.832899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.832957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.833171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.833239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.833399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.833433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.833592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.833641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.833859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.833915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.834071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.834105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.834243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.834277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.834409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.834458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.834625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.834662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 
00:37:21.133 [2024-10-13 20:07:10.834819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.834859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.835055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.835095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.835294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.835329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.835423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.835458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.835568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.835603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.835757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.835791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.835889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.835923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.836056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.836090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.836201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.836239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.836433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.836484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 
00:37:21.133 [2024-10-13 20:07:10.836637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.836686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.836889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.836925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.837153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.837216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.837335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.837371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.837523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.837568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.837757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.837835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.838075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.838115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.838250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.838285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.133 qpair failed and we were unable to recover it. 00:37:21.133 [2024-10-13 20:07:10.838432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.133 [2024-10-13 20:07:10.838468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.838603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.838637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 
00:37:21.134 [2024-10-13 20:07:10.838800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.838833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.838972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.839007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.839138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.839172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.839314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.839348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.839482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.839531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.839721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.839778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.839955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.839991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.840119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.840154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.840351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.840391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.840559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.840594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 
00:37:21.134 [2024-10-13 20:07:10.840767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.840806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.840922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.840964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.841128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.841164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.841294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.841331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.841445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.841480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.841663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.841713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.841878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.841932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.842099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.842174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.842350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.842389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.842527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.842568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 
00:37:21.134 [2024-10-13 20:07:10.842693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.842727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.842838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.842871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.843007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.843042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.843186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.843234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.843388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.843465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.843567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.843600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.843714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.843750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.843849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.843884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.844024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.844067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.844280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.844341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 
00:37:21.134 [2024-10-13 20:07:10.844524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.844589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.844752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.844793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.844945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.844984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.845105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.845142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.845269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.845324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.845462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.845499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.845653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.845690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.845894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.845952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.134 [2024-10-13 20:07:10.846086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.134 [2024-10-13 20:07:10.846140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.134 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.846310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.846356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 
00:37:21.135 [2024-10-13 20:07:10.846534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.846585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.846799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.846869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.847165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.847224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.847415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.847475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.847623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.847656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.847806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.847844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.848041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.848080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.848241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.848280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.848483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.848533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.848664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.848711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 
00:37:21.135 [2024-10-13 20:07:10.848948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.849004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.849116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.849166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.849311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.849344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.849505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.849540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.849667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.849720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.849883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.849921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.850059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.850097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.850245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.850289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.850506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.850556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.850675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.850713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 
00:37:21.135 [2024-10-13 20:07:10.850873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.850926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.851115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.851169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.851293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.851327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.851478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.851515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.851643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.851713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.851876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.851947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.852079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.852142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.852332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.852371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.852543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.852587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.852764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.852830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 
00:37:21.135 [2024-10-13 20:07:10.852974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.853033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.853174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.853236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.853379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.853419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.853571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.853629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.853780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.853832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.853957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.853992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.854144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.854204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.854370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.854425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.135 [2024-10-13 20:07:10.854574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.135 [2024-10-13 20:07:10.854624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.135 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.854735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.854768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 
00:37:21.136 [2024-10-13 20:07:10.854906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.854941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.855075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.855110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.855258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.855293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.855440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.855490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.855649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.855699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.855837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.855873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.856037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.856094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.856205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.856242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.856361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.856407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.856541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.856577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 
00:37:21.136 [2024-10-13 20:07:10.856725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.856775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.856893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.856929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.857025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.857059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.857168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.857202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.857357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.857392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.857507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.857540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.857660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.857696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.857845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.857899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.858034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.858070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.858205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.858242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 
00:37:21.136 [2024-10-13 20:07:10.858371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.858412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.858515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.858549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.858655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.858690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.858802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.858836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.858955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.859010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.859160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.859211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.859348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.859381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.859491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.859524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.859644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.859685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.859825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.859860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 
00:37:21.136 [2024-10-13 20:07:10.859976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.860011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.860195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.860232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.860353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.860390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.860578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.136 [2024-10-13 20:07:10.860612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.136 qpair failed and we were unable to recover it. 00:37:21.136 [2024-10-13 20:07:10.860745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.860783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.861031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.861087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.861228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.861267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.861419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.861455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.861564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.861597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.861746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.861801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 
00:37:21.137 [2024-10-13 20:07:10.861977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.862029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.862145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.862205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.862337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.862386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.862515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.862550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.862687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.862723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.862950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.862987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.863104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.863142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.863286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.863324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.863452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.863486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.863622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.863656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 
00:37:21.137 [2024-10-13 20:07:10.863788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.863825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.863992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.864030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.864152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.864189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.864315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.864357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.864505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.864539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.864681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.864714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.864865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.864902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.865068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.865105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.865272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.865309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.865470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.865504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 
00:37:21.137 [2024-10-13 20:07:10.865636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.865669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.865844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.865881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.866025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.866063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.866227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.866264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.866409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.866463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.866614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.866649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.866799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.866848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.866968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.867007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.867156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.867199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.867341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.867374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 
00:37:21.137 [2024-10-13 20:07:10.867543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.867578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.867728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.867766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.867936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.867973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.868079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.868114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.137 qpair failed and we were unable to recover it. 00:37:21.137 [2024-10-13 20:07:10.868226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.137 [2024-10-13 20:07:10.868262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.868448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.868503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.868646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.868694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.868854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.868895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.869068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.869106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.869284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.869322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 
00:37:21.138 [2024-10-13 20:07:10.869451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.869484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.869615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.869650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.869811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.869845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.869956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.869990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.870150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.870201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.870401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.870435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.870585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.870635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.870803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.870845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.871038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.871094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.871231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.871284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 
00:37:21.138 [2024-10-13 20:07:10.871438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.871493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.871626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.871662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.871813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.871864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.872128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.872184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.872358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.872400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.872514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.872547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.872669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.872719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.872876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.872918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.873098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.873170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.873317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.873356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 
00:37:21.138 [2024-10-13 20:07:10.873510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.873560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.873669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.873725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.873931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.873990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.874158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.874213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.874342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.874376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.874536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.874574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.874672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.874724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.874912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.874951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.875198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.875264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.875413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.875468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 
00:37:21.138 [2024-10-13 20:07:10.875626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.875660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.875791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.875848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.876004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.876061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.876194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.876246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.138 [2024-10-13 20:07:10.876402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.138 [2024-10-13 20:07:10.876436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.138 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.876570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.876605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.876747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.876781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.876900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.876936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.877079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.877118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.877253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.877291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 
00:37:21.139 [2024-10-13 20:07:10.877408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.877456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.877616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.877649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.877864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.877902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.878074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.878111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.878231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.878267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.878439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.878488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.878614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.878665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.878824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.878865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.878971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.879010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.879217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.879257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 
00:37:21.139 [2024-10-13 20:07:10.879449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.879485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.879632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.879667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.879833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.879868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.880020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.880059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.880230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.880269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.880441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.880476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.880583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.880616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.880765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.880803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.881003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.881042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.881156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.881192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 
00:37:21.139 [2024-10-13 20:07:10.881363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.881408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.881602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.881651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.881811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.881861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.882024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.882064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.882183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.882222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.882391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.882432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.882558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.882593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.882720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.882759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.882889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.882948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.883103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.883141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 
00:37:21.139 [2024-10-13 20:07:10.883282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.883319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.883495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.883544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.883675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.883726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.883841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.883897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.884045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.139 [2024-10-13 20:07:10.884085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.139 qpair failed and we were unable to recover it. 00:37:21.139 [2024-10-13 20:07:10.884207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.884245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.884419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.884474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.884591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.884626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.884796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.884861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.885055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.885118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 
00:37:21.140 [2024-10-13 20:07:10.885265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.885302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.885488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.885523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.885636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.885687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.885804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.885841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.885967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.886018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.886141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.886179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.886287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.886323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.886475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.886537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.886731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.886780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.886916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.886972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 
00:37:21.140 [2024-10-13 20:07:10.887075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.887111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.887236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.887274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.887437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.887472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.887610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.887645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.887860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.887897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.888051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.888088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.888233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.888272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.888403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.888437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.888615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.888681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 00:37:21.140 [2024-10-13 20:07:10.888889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.140 [2024-10-13 20:07:10.888948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.140 qpair failed and we were unable to recover it. 
00:37:21.425 [2024-10-13 20:07:10.891521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.891575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.891737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.891787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.892016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.892070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.892324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.892384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.892553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.892587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.892763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.892801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.892980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.893048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.893313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.893373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.893555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.893596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.893716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.893752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 
00:37:21.425 [2024-10-13 20:07:10.893962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.894018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.894222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.894285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.894444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.894479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.894592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.894641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.894818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.894872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.895006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.895063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.895334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.895403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.895571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.895606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.895786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.895858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.896115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.896187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 
00:37:21.425 [2024-10-13 20:07:10.896304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.896340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.896478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.896511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.896613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.896645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.896751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.896784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.896961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.896997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.897134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.897179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.897333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.897369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.897547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.425 [2024-10-13 20:07:10.897596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.425 qpair failed and we were unable to recover it. 00:37:21.425 [2024-10-13 20:07:10.897741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.897794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.897930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.897972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 
00:37:21.426 [2024-10-13 20:07:10.898147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.898196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.898353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.898388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.898504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.898539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.898647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.898699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.898903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.898966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.899184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.899241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.899421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.899476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.899606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.899640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.899786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.899823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.899968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 
00:37:21.426 [2024-10-13 20:07:10.900141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.900303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.900473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.900638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.900778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.900947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.900980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.901198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.901236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.901374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.901418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.901577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.901616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.901775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.901808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 
00:37:21.426 [2024-10-13 20:07:10.901916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.901968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.902089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.902140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.902264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.902302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.902412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.902465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.902625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.902691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.902827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.902883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.903008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.903044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.903191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.903230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.903373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.903418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.903569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.903619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 
00:37:21.426 [2024-10-13 20:07:10.903760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.903800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.903962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.904002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.904133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.904173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.904325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.904365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.904558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.904607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.904734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.904776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.904888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.904921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.905029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.426 [2024-10-13 20:07:10.905061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.426 qpair failed and we were unable to recover it. 00:37:21.426 [2024-10-13 20:07:10.905219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.905256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.905363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.905410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 
00:37:21.427 [2024-10-13 20:07:10.905561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.905597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.905772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.905812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.905968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.906023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.906192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.906245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.906386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.906431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.906580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.906631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.906810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.906865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.907078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.907140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.907318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.907352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.907464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.907497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 
00:37:21.427 [2024-10-13 20:07:10.907655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.907689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.907840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.907893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.908106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.908167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.908290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.908326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.908496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.908532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.908665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.908719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.908867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.908905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.909104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.909142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.909282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.909325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.909455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.909491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 
00:37:21.427 [2024-10-13 20:07:10.909601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.909635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.909768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.909800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.909902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.909934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.910078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.910152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.910330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.910384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.910536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.910574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.910730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.910771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.910938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.910976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.911182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.911240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.911385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.911430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 
00:37:21.427 [2024-10-13 20:07:10.911581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.911631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.911786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.911842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.911977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.912011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.912192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.912249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.912386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.912427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.912567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.912602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.912765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.912800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.427 [2024-10-13 20:07:10.912931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.427 [2024-10-13 20:07:10.912965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.427 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.913108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.913143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.913276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.913312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 
00:37:21.428 [2024-10-13 20:07:10.913445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.913478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.913633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.913666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.913768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.913804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.914051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.914110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.914273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.914308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.914417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.914451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.914554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.914586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.914712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.914761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.914916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.914957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.915114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.915168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 
00:37:21.428 [2024-10-13 20:07:10.915322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.915361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.915502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.915540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.915662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.915703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.915964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.916024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.916138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.916176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.916352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.916387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.916534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.916569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.916719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.916757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.916936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.916981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.917185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.917250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 
00:37:21.428 [2024-10-13 20:07:10.917409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.917463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.917563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.917597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.917729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.917776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.917972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.918040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.918164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.918214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.918324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.918361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.918548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.918582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.918683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.918736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.918934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.918995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.919187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.919247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 
00:37:21.428 [2024-10-13 20:07:10.919431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.919466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.919597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.919632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.919740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.919773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.919925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.919960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.920207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.920267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.920411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.920448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.920601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.920635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.920763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.428 [2024-10-13 20:07:10.920816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.428 qpair failed and we were unable to recover it. 00:37:21.428 [2024-10-13 20:07:10.920965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.921003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.921208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.921245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 
00:37:21.429 [2024-10-13 20:07:10.921432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.921466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.921609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.921659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.921852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.921890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.922066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.922130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.922302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.922348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.922505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.922541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.922709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.922760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.923062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.923101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.923228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.923277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.923429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.923481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 
00:37:21.429 [2024-10-13 20:07:10.923621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.923655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.923824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.923879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.924051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.924089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.924202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.924239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.924383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.924423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.924577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.924625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.924791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.924831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.925031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.925071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.925212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.925258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.925409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.925462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 
00:37:21.429 [2024-10-13 20:07:10.925584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.925619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.925724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.925758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.925894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.925929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.926152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.926189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.926311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.926349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.926495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.926559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.926703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.926740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.926891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.926929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.927109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.927168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.927346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.927380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 
00:37:21.429 [2024-10-13 20:07:10.927532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.927568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.927729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.429 [2024-10-13 20:07:10.927768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.429 qpair failed and we were unable to recover it. 00:37:21.429 [2024-10-13 20:07:10.927976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.928042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.928177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.928248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.928368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.928410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.928536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.928568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.928667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.928700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.928854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.928886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.929105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.929142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.929259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.929295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 
00:37:21.430 [2024-10-13 20:07:10.929457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.929493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.929592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.929625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.929754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.929786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.930013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.930077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.930269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.930307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.930467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.930502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.930597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.930631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.930730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.930765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.930862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.930895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.931076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.931114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 
00:37:21.430 [2024-10-13 20:07:10.931241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.931294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.931450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.931484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.931611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.931645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.931753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.931786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.931946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.931980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.932178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.932218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.932358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.932429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.932586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.932636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.932829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.932868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.933144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.933204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 
00:37:21.430 [2024-10-13 20:07:10.933319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.933356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.933499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.933534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.933671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.933705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.933833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.933886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.934057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.934124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.934273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.934312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.934443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.934478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.934601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.934636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.934738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.934772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.934871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.934904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 
00:37:21.430 [2024-10-13 20:07:10.935086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.935132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.935296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.935334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.430 qpair failed and we were unable to recover it. 00:37:21.430 [2024-10-13 20:07:10.935493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.430 [2024-10-13 20:07:10.935527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.935652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.935704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.935849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.935884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.936040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.936092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.936204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.936240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.936414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.936448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.936552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.936585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.936761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.936827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 
00:37:21.431 [2024-10-13 20:07:10.936995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.937032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.937171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.937225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.937417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.937453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.937557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.937590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.937712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.937762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.937910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.937953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.938139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.938185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.938295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.938345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.938523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.938584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.938768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.938817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 
00:37:21.431 [2024-10-13 20:07:10.938967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.939004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.939219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.939278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.939425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.939459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.939586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.939635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.939804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.939857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.940009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.940044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.940150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.940182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.940308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.940348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.940508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.940545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.940684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.940737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 
00:37:21.431 [2024-10-13 20:07:10.940926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.940995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.941116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.941176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.941299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.941347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.941539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.941589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.941768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.941812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.942068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.942127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.942283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.942320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.942474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.942508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.942633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.942668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.942794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.942832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 
00:37:21.431 [2024-10-13 20:07:10.942960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.942992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.943120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.943154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.431 [2024-10-13 20:07:10.943339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.431 [2024-10-13 20:07:10.943402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.431 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.943547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.943581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.943681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.943714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.943906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.943945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.944125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.944160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.944282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.944318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.944457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.944493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.944597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.944631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 
00:37:21.432 [2024-10-13 20:07:10.944770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.944803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.945037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.945099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.945225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.945257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.945361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.945402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.945518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.945552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.945680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.945720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.945825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.945878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.946074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.946149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.946324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.946358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.946476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.946509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 
00:37:21.432 [2024-10-13 20:07:10.946608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.946640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.946801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.946833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.946968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.947021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.947175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.947214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.947347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.947381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.947530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.947564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.947738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.947780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.947916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.947958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.948117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.948185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.948343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.948381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 
00:37:21.432 [2024-10-13 20:07:10.948522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.948556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.948657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.948690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.948844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.948877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.949020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.949055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.949182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.949239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.949408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.949469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.949569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.949602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.949729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.949762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.949973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.950047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.950227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.950261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 
00:37:21.432 [2024-10-13 20:07:10.950369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.950431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.950585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.950619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.950761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.432 [2024-10-13 20:07:10.950797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.432 qpair failed and we were unable to recover it. 00:37:21.432 [2024-10-13 20:07:10.950977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.951014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.951136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.951174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.951310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.951344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.951514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.951548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.951651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.951703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.951835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.951869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.952030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.952064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 
00:37:21.433 [2024-10-13 20:07:10.952194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.952228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.952406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.952440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.952599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.952633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.952809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.952847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.952994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.953027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.953133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.953173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.953325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.953364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.953527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.953561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.953686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.953739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.953913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.953950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 
00:37:21.433 [2024-10-13 20:07:10.954101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.954134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.954233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.954280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.954449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.954484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.954585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.954617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.954738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.954771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.954953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.954990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.955168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.955202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.955301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.955332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.955441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.955474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.955615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.955649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 
00:37:21.433 [2024-10-13 20:07:10.955825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.955862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.956079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.956117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.956311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.956348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.956480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.956514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.956609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.956641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.956735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.956767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.956927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.956962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.433 qpair failed and we were unable to recover it. 00:37:21.433 [2024-10-13 20:07:10.957167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.433 [2024-10-13 20:07:10.957201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.957343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.957375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.957481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.957513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 
00:37:21.434 [2024-10-13 20:07:10.957610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.957642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.957803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.957834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.958047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.958084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.958249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.958299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.958536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.958569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.958722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.958772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.958914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.958951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.959109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.959144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.959274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.959324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.959466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.959503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 
00:37:21.434 [2024-10-13 20:07:10.959605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.959639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.959770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.959804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.959962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.959999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.960132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.960164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.960309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.960343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.960507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.960573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.960763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.960800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.960929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.960998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.961207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.961261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.961507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.961545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 
00:37:21.434 [2024-10-13 20:07:10.961680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.961726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.961863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.961902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.962059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.962094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.962214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.962248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.962417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.962467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.962610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.962647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.962763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.962799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.963062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.963121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.963254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.963287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.963422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.963457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 
00:37:21.434 [2024-10-13 20:07:10.963613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.963647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.963784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.963818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.963929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.963963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.964123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.964160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.964278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.964310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.964457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.964508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.964662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.964729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.434 qpair failed and we were unable to recover it. 00:37:21.434 [2024-10-13 20:07:10.964897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.434 [2024-10-13 20:07:10.964934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.965064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.965116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.965271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.965309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 
00:37:21.435 [2024-10-13 20:07:10.965466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.965501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.965631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.965666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.965798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.965836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.965993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.966027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.966140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.966172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.966321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.966373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.966506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.966538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.966669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.966720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.966831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.966867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 00:37:21.435 [2024-10-13 20:07:10.967015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.435 [2024-10-13 20:07:10.967049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.435 qpair failed and we were unable to recover it. 
00:37:21.435 [2024-10-13 20:07:10.967221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:21.435 [2024-10-13 20:07:10.967259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 
00:37:21.435 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 20:07:10.967 through 20:07:11.006, cycling over tqpairs 0x6150001f2f00, 0x6150001ffe80, 0x61500021ff00, and 0x615000210000, all targeting addr=10.0.0.2, port=4420 ...]
00:37:21.440 [2024-10-13 20:07:11.006615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.006647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.006778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.006815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.006976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.007014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.007151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.007189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.007350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.007389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.007564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.007599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.007704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.007760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.007876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.007912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.008021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.008071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.008237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.008277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 
00:37:21.440 [2024-10-13 20:07:11.008474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.008525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.008671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.008710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.008869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.008904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.009032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.440 [2024-10-13 20:07:11.009077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.440 qpair failed and we were unable to recover it. 00:37:21.440 [2024-10-13 20:07:11.009280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.009320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.009473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.009509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.009627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.009667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.009774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.009808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.009918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.009953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.010180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.010241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 
00:37:21.441 [2024-10-13 20:07:11.010391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.010430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.010562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.010595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.010829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.010891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.011149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.011210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.011386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.011456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.011592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.011627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.011760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.011794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.011930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.011980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.012186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.012251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.012410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.012444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 
00:37:21.441 [2024-10-13 20:07:11.012547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.012580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.012701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.012739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.012895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.012929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.013040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.013091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.013238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.013275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.013402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.013444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.013591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.013641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.013767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.013807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.013926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.013965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.014114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.014153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 
00:37:21.441 [2024-10-13 20:07:11.014320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.014376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.014563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.014612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.014783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.014818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.015038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.015097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.015251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.015300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.015468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.015503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.015605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.015636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.015766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.015802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.015945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.015996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.016146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.016185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 
00:37:21.441 [2024-10-13 20:07:11.016311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.016350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.016510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.016559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.016695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.016732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.016891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.016944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.441 [2024-10-13 20:07:11.017086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.441 [2024-10-13 20:07:11.017150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.441 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.017310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.017359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.017485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.017520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.017649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.017703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.017824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.017875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.018034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.018088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 
00:37:21.442 [2024-10-13 20:07:11.018239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.018280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.018474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.018524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.018656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.018706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.018844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.018897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.019034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.019072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.019212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.019250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.019400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.019439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.019590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.019624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.019762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.019813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.019932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.019970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 
00:37:21.442 [2024-10-13 20:07:11.020094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.020129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.020239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.020275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.020410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.020444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.020546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.020578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.020708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.020759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.020905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.020949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.021065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.021101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.021213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.021249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.021404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.021439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.021565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.021597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 
00:37:21.442 [2024-10-13 20:07:11.021761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.021828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.022028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.022082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.022196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.022231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.022338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.022374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.022513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.022567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.022752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.022807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.022992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.023054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.023254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.023313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.023463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.023497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 00:37:21.442 [2024-10-13 20:07:11.023608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.442 [2024-10-13 20:07:11.023642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.442 qpair failed and we were unable to recover it. 
00:37:21.443 [2024-10-13 20:07:11.023788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.023826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.024076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.024142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.024287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.024335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.024478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.024512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.024646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.024678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.024806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.024859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.025004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.025041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.025208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.025256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.025439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.025479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.025610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.025659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 
00:37:21.443 [2024-10-13 20:07:11.025845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.025885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.026003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.026042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.026192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.026233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.026420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.026470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.026615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.026652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.026753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.026806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.027068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.027129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.027252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.027301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.027460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.027510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.027687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.027742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 
00:37:21.443 [2024-10-13 20:07:11.027930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.027991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.028103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.028140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.028308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.028346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.028501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.028536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.028688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.028738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.029006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.029068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.029336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.029408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.029543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.029578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.029797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.029857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.030021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.030103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 
00:37:21.443 [2024-10-13 20:07:11.030247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.030285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.030473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.030508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.030617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.030651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.030796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.030848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.031020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.031077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.031246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.031288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.031445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.031497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.031630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.031663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.031790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.031822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.443 qpair failed and we were unable to recover it. 00:37:21.443 [2024-10-13 20:07:11.031973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.443 [2024-10-13 20:07:11.032010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.444 qpair failed and we were unable to recover it. 
00:37:21.444 [2024-10-13 20:07:11.032116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.444 [2024-10-13 20:07:11.032154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:21.444 qpair failed and we were unable to recover it.
00:37:21.444 [2024-10-13 20:07:11.032701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.444 [2024-10-13 20:07:11.032751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:21.444 qpair failed and we were unable to recover it.
00:37:21.444 [2024-10-13 20:07:11.035926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.444 [2024-10-13 20:07:11.035980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:21.444 qpair failed and we were unable to recover it.
[identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock failure records repeat continuously for tqpair=0x6150001f2f00, 0x61500021ff00 and 0x6150001ffe80 (addr=10.0.0.2, port=4420) through 2024-10-13 20:07:11.071]
00:37:21.449 [2024-10-13 20:07:11.071550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.071583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.071706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.071744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.071919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.071960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.072083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.072114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.072273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.072330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.072479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.072512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.072644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.072677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.072782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.072824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.072897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:21.449 [2024-10-13 20:07:11.073099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.073155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 
00:37:21.449 [2024-10-13 20:07:11.073337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.073379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.073514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.073549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.073686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.073720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.073927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.073968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.074157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.074211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.074388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.074435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.074607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.074661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.074832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.074869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.075011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.075150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 
00:37:21.449 [2024-10-13 20:07:11.075319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.075459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.075615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.075776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.075937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.075970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.076101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.076135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.449 [2024-10-13 20:07:11.076304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.449 [2024-10-13 20:07:11.076337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.449 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.076503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.076536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.076703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.076757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.076913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.076948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 
00:37:21.450 [2024-10-13 20:07:11.077054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.077087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.077224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.077264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.077420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.077455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.077567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.077599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.077756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.077795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.077987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.078021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.078118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.078150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.078299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.078336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.078461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.078494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.078654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.078688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 
00:37:21.450 [2024-10-13 20:07:11.078810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.078848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.079011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.079045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.079198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.079237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.079376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.079423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.079539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.079573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.079722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.079772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.079955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.079997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.080151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.080186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.080321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.080374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.080534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.080569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 
00:37:21.450 [2024-10-13 20:07:11.080665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.080698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.080802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.080836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.081012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.081066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.081226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.081263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.081379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.081420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.081556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.081589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.081697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.081735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.081882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.081934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.082086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.082125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.082267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.082301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 
00:37:21.450 [2024-10-13 20:07:11.082416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.082461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.082607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.082641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.082781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.082815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.082971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.083006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.083153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.083192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.083358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.083398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.083515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.450 [2024-10-13 20:07:11.083548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.450 qpair failed and we were unable to recover it. 00:37:21.450 [2024-10-13 20:07:11.083695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.083740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.083890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.083924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.084050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.084103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 
00:37:21.451 [2024-10-13 20:07:11.084240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.084278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.084412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.084447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.084557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.084590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.084752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.084790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.084936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.084971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.085129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.085182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.085328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.085367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.085510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.085545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.085643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.085677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.085829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.085866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 
00:37:21.451 [2024-10-13 20:07:11.086015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.086051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.086144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.086178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.086346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.086386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.086555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.086591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.086687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.086721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.086848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.086885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.087047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.087083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.087270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.087316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.087471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.087506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.087615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.087647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 
00:37:21.451 [2024-10-13 20:07:11.087783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.087822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.087944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.087977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.088111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.088146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.088271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.088325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.088470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.088507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.088613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.088646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.088826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.088869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.089013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.089074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.089219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.089255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.089418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.089472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 
00:37:21.451 [2024-10-13 20:07:11.089588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.089622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.089750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.089785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.451 qpair failed and we were unable to recover it. 00:37:21.451 [2024-10-13 20:07:11.089889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.451 [2024-10-13 20:07:11.089924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.090097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.090132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.090257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.090295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.090455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.090490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.090613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.090646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.090785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.090858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.090957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.090988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.091123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.091157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 
00:37:21.452 [2024-10-13 20:07:11.091296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.091330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.091484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.091534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.091657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.091693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.091919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.091955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.092087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.092125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.092264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.092302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.092433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.092469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.092582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.092618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.092797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.092835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.092970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 
00:37:21.452 [2024-10-13 20:07:11.093159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.093334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.093494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.093648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.093792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.093957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.093991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.094124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.094158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.094291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.094325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.094463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.094498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.094606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.094640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 
00:37:21.452 [2024-10-13 20:07:11.094795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.094833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.095022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.095063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.095178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.095232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.095389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.095434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.095558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.095592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.095708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.095743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.095891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.095931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.096096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.096130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.096310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.096349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.096487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.096519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 
00:37:21.452 [2024-10-13 20:07:11.096621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.096653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.096794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.096845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.452 qpair failed and we were unable to recover it. 00:37:21.452 [2024-10-13 20:07:11.097012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.452 [2024-10-13 20:07:11.097068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.097225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.097265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.097400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.097434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.097564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.097613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.097751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.097787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.097888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.097922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.098059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.098120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.098269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.098304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 
00:37:21.453 [2024-10-13 20:07:11.098426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.098461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.098572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.098606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.098739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.098773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.098877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.098928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.099131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.099180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.099289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.099325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.099459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.099509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.099625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.099659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.099774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.099815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.099997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.100046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 
00:37:21.453 [2024-10-13 20:07:11.100206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.100247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.100408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.100445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.100564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.100598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.100729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.100763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.100952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.100987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.101177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.101218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.101342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.101380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.101511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.101544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.101647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.101691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.101817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.101859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 
00:37:21.453 [2024-10-13 20:07:11.102016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.102155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.102297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.102454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.102598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.102785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.102955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.102996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.103102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.103137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.103272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.103309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.103463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.103497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 
00:37:21.453 [2024-10-13 20:07:11.103604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.103638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.103815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.103848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.453 [2024-10-13 20:07:11.103953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.453 [2024-10-13 20:07:11.103987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.453 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.104092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.104126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.104272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.104309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.104464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.104498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.104610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.104643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.104778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.104812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.104917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.104950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.105082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.105117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 
00:37:21.454 [2024-10-13 20:07:11.105295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.105348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.105500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.105537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.105655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.105707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.105855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.105894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.106058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.106092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.106197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.106231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.106421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.106475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.106627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.106661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.106793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.106843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.106951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.106987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 
00:37:21.454 [2024-10-13 20:07:11.107138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.107177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.107294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.107346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.107484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.107518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.107633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.107665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.107764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.107797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.107942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.107977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.108132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.108166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.108320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.108356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.108522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.108572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.108692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.108728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 
00:37:21.454 [2024-10-13 20:07:11.108854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.108905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.109093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.109256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.109419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.109566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.109710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.109856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.109991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.110028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.110190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.110223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.110355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.110388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 
00:37:21.454 [2024-10-13 20:07:11.110549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.110585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.110719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.110753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.110851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.454 [2024-10-13 20:07:11.110884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.454 qpair failed and we were unable to recover it. 00:37:21.454 [2024-10-13 20:07:11.111005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.111044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.111186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.111240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.111386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.111451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.111560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.111594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.111718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.111759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.111869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.111921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.112062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.112100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 
00:37:21.455 [2024-10-13 20:07:11.112232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.112265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.112372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.112419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.112525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.112562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.112663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.112698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.112802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.112837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.113030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.113188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.113329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.113471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.113624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 
00:37:21.455 [2024-10-13 20:07:11.113763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.113925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.455 [2024-10-13 20:07:11.113959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.455 qpair failed and we were unable to recover it. 00:37:21.455 [2024-10-13 20:07:11.114058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.114091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.114281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.114336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.114484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.114520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.114628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.114661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.114796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.114849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.114969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.115008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.115139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.115174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.115337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.115390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 
00:37:21.456 [2024-10-13 20:07:11.115554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.115589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.115727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.115761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.115869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.115922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.116067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.116105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.116248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.116281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.116415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.116461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.116568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.116604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.116731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.116765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.116898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.116953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.117070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.117108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 
00:37:21.456 [2024-10-13 20:07:11.117238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.117272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.117416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.117450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.117557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.117591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.117697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.117730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.117861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.117896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.118015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.118046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.118193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.118226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.118336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.118371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.118515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.118564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.118719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.118755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 
00:37:21.456 [2024-10-13 20:07:11.118881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.118916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.119027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.119071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.119248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.119287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.119416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.119475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.119571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.119605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.119696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.119730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.119902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.119973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.120156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.120196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.120364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.120407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.120546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.120577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 
00:37:21.456 [2024-10-13 20:07:11.120707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.120742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.120891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.456 [2024-10-13 20:07:11.120925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.456 qpair failed and we were unable to recover it. 00:37:21.456 [2024-10-13 20:07:11.121040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.121073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.121211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.121244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.121439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.121489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.121620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.121660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.121785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.121819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.121955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.121987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.122119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.122152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.122311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.122344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 
00:37:21.457 [2024-10-13 20:07:11.122474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.122514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.122641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.122700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.122812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.122848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.122953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.122988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.123116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.123150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.123301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.123339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.123543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.123585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.123754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.123810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.123972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.124007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.124182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.124247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 
00:37:21.457 [2024-10-13 20:07:11.124407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.124490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.124628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.124663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.124820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.124854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.125033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.125071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.125214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.125252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.125383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.125425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.125559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.125592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.125789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.125823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.125924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.125957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.126088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.126122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 
00:37:21.457 [2024-10-13 20:07:11.126299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.126334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.126450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.126484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.126590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.126623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.126762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.126799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.126958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.126991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.127096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.127128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.127263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.127297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.127435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.127470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.127571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.127602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.127746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.127781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 
00:37:21.457 [2024-10-13 20:07:11.127901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.127939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.128083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.128120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.457 qpair failed and we were unable to recover it. 00:37:21.457 [2024-10-13 20:07:11.128261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.457 [2024-10-13 20:07:11.128295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.128402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.128436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.128543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.128577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.128716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.128749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.128878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.128911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.129087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.129124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.129255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.129289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.129412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.129461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 
00:37:21.458 [2024-10-13 20:07:11.129594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.129643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.129797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.129834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.129994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.130030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.130187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.130227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.130390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.130432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.130568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.130602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.130741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.130784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.130912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.130945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.131054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.131089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.131242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.131280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 
00:37:21.458 [2024-10-13 20:07:11.131425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.131484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.131599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.131634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.131777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.131811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.131917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.131966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.132078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.132117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.132293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.132328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.132432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.132466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.132643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.132677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.132845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.132880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.133061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.133098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 
00:37:21.458 [2024-10-13 20:07:11.133275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.133314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.133455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.133491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.133603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.133637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.133767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.133802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.133958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.133993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.134118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.134158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.134275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.134312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.134454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.134494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.134604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.134636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.134789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.134826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 
00:37:21.458 [2024-10-13 20:07:11.134978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.135011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.135125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.135176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.458 qpair failed and we were unable to recover it. 00:37:21.458 [2024-10-13 20:07:11.135355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.458 [2024-10-13 20:07:11.135399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.135537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.135572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.135721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.135774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.135922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.135961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.136111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.136145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.136269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.136319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.136429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.136482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.136606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.136640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 
00:37:21.459 [2024-10-13 20:07:11.136788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.136843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.136958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.136997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.137146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.137180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.137321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.137376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.137557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.137605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.137753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.137792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.137901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.137960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.138111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.138154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.138316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.138350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.138458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.138494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 
00:37:21.459 [2024-10-13 20:07:11.138594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.138627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.138726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.138765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.138898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.138934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.139069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.139103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.139293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.139331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.139481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.139517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.139641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.139691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.139892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.139929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.140037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.140070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.140204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.140237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 
00:37:21.459 [2024-10-13 20:07:11.140381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.140422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.140529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.140563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.140677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.140711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.140865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.140899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.141089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.141143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.141278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.141311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.141460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.141495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.141607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.141640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.141793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.141830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 00:37:21.459 [2024-10-13 20:07:11.141983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.459 [2024-10-13 20:07:11.142017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.459 qpair failed and we were unable to recover it. 
00:37:21.460 [2024-10-13 20:07:11.142155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.142210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.142334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.142372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.142511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.142544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.142683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.142718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.142843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.142880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.143023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.143168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.143323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.143507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.143646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 
00:37:21.460 [2024-10-13 20:07:11.143816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.143960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.143996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.144162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.144198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.144302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.144335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.144467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.144518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.144663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.144698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.144831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.144869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.145024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.145063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.145268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.145307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.145462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.145497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 
00:37:21.460 [2024-10-13 20:07:11.145603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.145635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.145775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.145809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.145918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.145952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.146091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.146127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.146253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.146292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.146437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.146503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.146624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.146659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.146846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.146881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.146991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.147026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.147135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.147168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 
00:37:21.460 [2024-10-13 20:07:11.147296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.147335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.147483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.147520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.147647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.147717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.147909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.147945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.148101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.460 [2024-10-13 20:07:11.148140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.460 qpair failed and we were unable to recover it. 00:37:21.460 [2024-10-13 20:07:11.148285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.148323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.148464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.148500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.148606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.148639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.148746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.148782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.148907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.148961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 
00:37:21.461 [2024-10-13 20:07:11.149111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.149151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.149326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.149380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.149534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.149585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.149731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.149766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.149864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.149898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.150029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.150067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.150244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.150282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.150413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.150463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.150578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.150612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.150730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.150779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 
00:37:21.461 [2024-10-13 20:07:11.150942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.150998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.151144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.151199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.151356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.151390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.151517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.151553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.151716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.151770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.152004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.152044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.152191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.152235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.152360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.152403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.152527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.152560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.152658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.152709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 
00:37:21.461 [2024-10-13 20:07:11.152820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.152859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.153015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.153072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.153214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.153251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.153384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.153426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.153530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.153565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.153670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.153724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.153895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.153932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.154055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.154093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.154197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.154234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.154358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.154403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 
00:37:21.461 [2024-10-13 20:07:11.154516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.154551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.154665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.154698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.154828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.154866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.154967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.155004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.155120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.461 [2024-10-13 20:07:11.155157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.461 qpair failed and we were unable to recover it. 00:37:21.461 [2024-10-13 20:07:11.155270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.155307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.155454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.155489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.155601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.155634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.155774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.155808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.155940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.155974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 
00:37:21.462 [2024-10-13 20:07:11.156101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.156139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.156266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.156319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.156452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.156487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.156620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.156669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.156863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.156900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.157054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.157092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.157204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.157242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.157388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.157432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.157545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.157579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.157700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.157740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 
00:37:21.462 [2024-10-13 20:07:11.157877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.157920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.158041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.158078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.158203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.158254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.158425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.158476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.158598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.158658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.158801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.158843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.159018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.159061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.159172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.159210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.159325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.159363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.159514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.159565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 
00:37:21.462 [2024-10-13 20:07:11.159711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.159749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.159884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.159939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.160105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.160163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.160276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.160320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.160445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.160480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.160578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.160612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.160791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.160851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.161021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.161084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.161262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.161299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.161452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.161488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 
00:37:21.462 [2024-10-13 20:07:11.161623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.161671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.161859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.161910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.162059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.162126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.162264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.162296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.162428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.162468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.462 qpair failed and we were unable to recover it. 00:37:21.462 [2024-10-13 20:07:11.162578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.462 [2024-10-13 20:07:11.162620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.162756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.162793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.162933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.162978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.163097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.163133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.163268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.163318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 
00:37:21.463 [2024-10-13 20:07:11.163460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.163510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.163621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.163657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.163846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.163884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.164072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.164128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.164247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.164283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.164407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.164459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.164570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.164620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.164826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.164884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.164989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.165041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.165169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.165235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 
00:37:21.463 [2024-10-13 20:07:11.165355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.165389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.165555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.165609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.165757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.165822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.165975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.166030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.166217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.166254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.166384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.166448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.166570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.166620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.166868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.166927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.167091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.167150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.167272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.167311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 
00:37:21.463 [2024-10-13 20:07:11.167476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.167526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.167662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.167730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.167908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.167964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.168123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.168180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.168307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.168339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.168466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.168501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.168610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.168643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.168771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.168805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.168958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.168995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.169102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.169168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 
00:37:21.463 [2024-10-13 20:07:11.169310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.169348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.169466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.169501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.169598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.169631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.169845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.169879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.170025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.170062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.463 [2024-10-13 20:07:11.170164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.463 [2024-10-13 20:07:11.170202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.463 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.170328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.170363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.170486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.170522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.170634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.170683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.170821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.170858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 
00:37:21.464 [2024-10-13 20:07:11.170978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.171016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.171159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.171198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.171339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.171388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.171517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.171559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.171654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.171690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.171846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.171901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.172057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.172110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.172225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.172260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.172370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.172413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.172557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.172594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 
00:37:21.464 [2024-10-13 20:07:11.172770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.172807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.172987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.173024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.173159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.173196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.173314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.173351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.173496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.173534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.173664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.173702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.173903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.173954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.174111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.174167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.174264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.174298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.174408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.174443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 
00:37:21.464 [2024-10-13 20:07:11.174561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.174600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.174712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.174750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.174897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.174934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.175058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.175115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.175235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.175272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.175456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.175508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.175650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.175692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.175861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.175899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.176011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.176048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.176229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.176285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 
00:37:21.464 [2024-10-13 20:07:11.176436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.464 [2024-10-13 20:07:11.176485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.464 qpair failed and we were unable to recover it. 00:37:21.464 [2024-10-13 20:07:11.176612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.176649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.176787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.176826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.176957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.177010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.177150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.177189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.177304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.177342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.177489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.177538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.177713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.177756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.177903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.177942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.178066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.178102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 
00:37:21.465 [2024-10-13 20:07:11.178255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.178291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.178447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.178498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.178609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.178644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.178823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.178867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.179011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.179049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.179158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.179196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.179357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.179403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.179528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.179565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.179725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.179779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.179932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.179983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 
00:37:21.465 [2024-10-13 20:07:11.180101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.180154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.180293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.180328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.180457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.180507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.180624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.180661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.180797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.180833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.180965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.181000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.181109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.181162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.181343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.181382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.181515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.181551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.181740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.181794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 
00:37:21.465 [2024-10-13 20:07:11.181948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.182008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.182169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.182226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.182411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.182445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.182572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.182621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.182779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.182819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.182991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.183050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.183203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.183238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.183391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.183456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.183567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.183601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.183731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.183781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 
00:37:21.465 [2024-10-13 20:07:11.183942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.465 [2024-10-13 20:07:11.183981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.465 qpair failed and we were unable to recover it. 00:37:21.465 [2024-10-13 20:07:11.184182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.184220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.184361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.184409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.184522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.184556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.184655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.184688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.184820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.184871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.185013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.185050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.185175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.185227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.185422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.185476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.185592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.185626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 
00:37:21.466 [2024-10-13 20:07:11.185789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.185822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.185977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.186014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.186137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.186174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.186352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.186391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.186609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.186643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.186799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.186836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.187036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.187073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.187188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.187235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.187417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.187467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.187572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.187606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 
00:37:21.466 [2024-10-13 20:07:11.187736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.187769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.187945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.187983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.188115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.188167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.188348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.188409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.188535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.188571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.188704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.188741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.188901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.188960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.189133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.189189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.189388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.189456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.189580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.189618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 
00:37:21.466 [2024-10-13 20:07:11.189732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.189769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.189898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.189951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.190209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.190268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.190389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.190462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.190579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.190613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.190813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.190868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.190985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.191024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.191197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.191233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.191390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.466 [2024-10-13 20:07:11.191444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.466 qpair failed and we were unable to recover it. 00:37:21.466 [2024-10-13 20:07:11.191575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.191609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 
00:37:21.467 [2024-10-13 20:07:11.191808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.191861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.192055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.192112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.192271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.192305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.192434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.192469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.192597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.192651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.192836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.192888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.193052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.193104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.193264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.193298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.193451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.193487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.193615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.193664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 
00:37:21.467 [2024-10-13 20:07:11.193832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.193868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.194000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.194035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.194142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.194178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.194301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.194340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.194465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.194515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.194643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.194709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.194904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.194940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.195124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.195158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.195270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.195305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.195443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.195491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 
00:37:21.467 [2024-10-13 20:07:11.195605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.195638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.195778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.195809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.195947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.195980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.196115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.196155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.196264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.196295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.196423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.196467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.467 [2024-10-13 20:07:11.196576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.467 [2024-10-13 20:07:11.196608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.467 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.196731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.196782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.197659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.197717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.197899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.197946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 
00:37:21.468 [2024-10-13 20:07:11.198126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.198164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.198331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.198365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.198519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.198552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.198673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.198736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.198877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.198913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.199062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.199098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.199228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.199277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.199443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.199478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.199608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.199642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.199832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.199884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 
00:37:21.468 [2024-10-13 20:07:11.200054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.200099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.200218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.200257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.200407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.200473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.200585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.200619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.200743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.200785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.200888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.200921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.201052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.201089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.201259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.201296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.201415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.201473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.201594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.201641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 
00:37:21.468 [2024-10-13 20:07:11.201801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.201839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.201952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.201986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.202090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.202144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.202322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.202366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.202528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.202577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.202750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.202807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.468 qpair failed and we were unable to recover it. 00:37:21.468 [2024-10-13 20:07:11.202949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.468 [2024-10-13 20:07:11.203014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.203164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.203207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.203349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.203386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.203516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.203551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 
00:37:21.469 [2024-10-13 20:07:11.203662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.203711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.203817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.203865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.204000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.204051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.204191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.204230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.204382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.204443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.204547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.204580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.204691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.204723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.204915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.204952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.205149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.205187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.205301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.205336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 
00:37:21.469 [2024-10-13 20:07:11.205500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.205533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.205637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.205670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.205839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.205873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.205998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.206034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.206205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.206242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.206354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.206390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.206542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.469 [2024-10-13 20:07:11.206574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.469 qpair failed and we were unable to recover it. 00:37:21.469 [2024-10-13 20:07:11.206671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.206705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.206796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.206841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.207020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.207075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 
00:37:21.470 [2024-10-13 20:07:11.207306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.207346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.207504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.207540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.207638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.207700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.207886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.207922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.208086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.208152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.208301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.208337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.208499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.208549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.208665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.208701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.208850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.208903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.209132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.209198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 
00:37:21.470 [2024-10-13 20:07:11.209309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.209344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.209464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.209499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.209604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.209667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.209805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.209848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.210004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.210061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.210213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.210277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.210413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.210467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.470 [2024-10-13 20:07:11.210594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.470 [2024-10-13 20:07:11.210661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.470 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.210846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.210910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.211017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.211051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 
00:37:21.471 [2024-10-13 20:07:11.211153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.211187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.211345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.211381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.211536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.211585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.211706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.211741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.211915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.211974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.212129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.212187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.212299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.212350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.212477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.212512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.212637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.212686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.212855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.212896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 
00:37:21.471 [2024-10-13 20:07:11.213060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.213118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.213238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.213287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.213427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.213470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.213582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.213616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.213740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.213774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.213871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.213914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.214033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.214071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.214247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.214305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.214438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.214488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.214601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.214636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 
00:37:21.471 [2024-10-13 20:07:11.214791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.214854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.214987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.471 [2024-10-13 20:07:11.215025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.471 qpair failed and we were unable to recover it. 00:37:21.471 [2024-10-13 20:07:11.215163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.215208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.215375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.215440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.215557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.215591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.215719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.215787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.215945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.216005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.216203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.216242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.216383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.216477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.216590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.216635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 
00:37:21.754 [2024-10-13 20:07:11.216752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.216785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.216919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.216971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.217125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.217160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.217306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.217346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.217464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.217500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.217626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.217675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.217792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.217829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.217945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.217981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.218094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.218127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 00:37:21.754 [2024-10-13 20:07:11.218301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.754 [2024-10-13 20:07:11.218368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.754 qpair failed and we were unable to recover it. 
00:37:21.754 [2024-10-13 20:07:11.218500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.754 [2024-10-13 20:07:11.218535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:21.754 qpair failed and we were unable to recover it.
00:37:21.754 [... the same three-message sequence repeats back-to-back through 2024-10-13 20:07:11.261589 (console time 00:37:21.760): every reconnect attempt logs connect() failed, errno = 111, then a sock connection error for tqpair 0x6150001f2f00, 0x6150001ffe80, 0x61500021ff00 or 0x615000210000 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:37:21.760 [2024-10-13 20:07:11.261713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.261751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.261935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.261973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.262123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.262161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.262307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.262346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.262483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.262518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.262703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.262760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.262953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.263005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.263163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.263223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.263382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.263429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.263565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.263598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 
00:37:21.760 [2024-10-13 20:07:11.263740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.263804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.263945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.263981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.264114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.264148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.264308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.264342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.264492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.264541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.264735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.264790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.265017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.265080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.265261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.265299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.265421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.265475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.265641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.265691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 
00:37:21.760 [2024-10-13 20:07:11.265873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.265929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.266146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.266211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.266379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.266422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.266597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.266662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.266841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.266892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.267049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.267088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.267261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.267299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.267430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.267480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.267692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.267747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.267941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.267994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 
00:37:21.760 [2024-10-13 20:07:11.268146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.268185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.268312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.268356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.268496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.268530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.268659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.268711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.760 qpair failed and we were unable to recover it. 00:37:21.760 [2024-10-13 20:07:11.268852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.760 [2024-10-13 20:07:11.268890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.269121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.269169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.269319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.269357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.269526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.269563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.269685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.269734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.269895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.269952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 
00:37:21.761 [2024-10-13 20:07:11.270100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.270161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.270312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.270345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.270536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.270590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.270742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.270779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.270938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.271019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.271271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.271326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.271511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.271547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.271651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.271703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.271877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.271916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.272097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.272155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 
00:37:21.761 [2024-10-13 20:07:11.272314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.272374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.272586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.272640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.272749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.272786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.272913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.272977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.273135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.273188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.273347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.273410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.273547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.273583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.273698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.273731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.273866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.273911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.274019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.274050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 
00:37:21.761 [2024-10-13 20:07:11.274208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.274242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.274388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.274444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.274567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.274616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.274783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.274823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.275036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.275076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.275220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.275258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.275405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.275467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.275626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.275676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.275894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.275963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.276089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.276126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 
00:37:21.761 [2024-10-13 20:07:11.276254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.276290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.276434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.276468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.276572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.276606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.276777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.276813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.761 [2024-10-13 20:07:11.276974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.761 [2024-10-13 20:07:11.277013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.761 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.277163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.277232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.277407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.277465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.277621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.277673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.277829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.277887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.278126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.278166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 
00:37:21.762 [2024-10-13 20:07:11.278340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.278380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.278553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.278588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.278783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.278821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.278938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.278988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.279221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.279304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.279470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.279508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.279642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.279710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.279933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.279991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.280190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.280251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.280438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.280472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 
00:37:21.762 [2024-10-13 20:07:11.280595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.280644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.280824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.280890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.281105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.281167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.281304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.281353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.281550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.281599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.281787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.281841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.281965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.282008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.282219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.282285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.282440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.282476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.282629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.282709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 
00:37:21.762 [2024-10-13 20:07:11.282972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.283040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.283241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.283339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.283526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.283561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.283710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.283764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.284026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.284092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.284209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.284245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.284408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.284447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.284573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.284607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.284724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.284761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 00:37:21.762 [2024-10-13 20:07:11.284919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.762 [2024-10-13 20:07:11.284952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.762 qpair failed and we were unable to recover it. 
00:37:21.762 [2024-10-13 20:07:11.285093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.285146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.285293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.285330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.285515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.285565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.285684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.285722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.285909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.285985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.286177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.286234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.286414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.286460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.286615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.286671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.286910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.286952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.287203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.287261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 
00:37:21.763 [2024-10-13 20:07:11.287456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.287490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.287628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.287686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.287841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.287874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.287968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.288001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.288128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.288165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.288335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.288371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.288528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.288578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.288703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.288751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.288928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.288964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.289098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.289154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 
00:37:21.763 [2024-10-13 20:07:11.289289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.289333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.289476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.289512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.289621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.289662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.289846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.289885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.290063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.290100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.290265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.290304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.290516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.290555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.290696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.290743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.290970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.291030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.291229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.291290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 
00:37:21.763 [2024-10-13 20:07:11.291442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.291478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.291613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.291654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.291952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.292011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.292207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.292245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.292449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.292494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.292642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.292710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.292890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.292927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.293114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.293178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.293350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.293389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 00:37:21.763 [2024-10-13 20:07:11.293564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.763 [2024-10-13 20:07:11.293597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.763 qpair failed and we were unable to recover it. 
00:37:21.763 [2024-10-13 20:07:11.293731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.293782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.293926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.293965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.294181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.294219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.294390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.294448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.294600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.294645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.294845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.294879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.295014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.295048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.295187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.295222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.295364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.295404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.295573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.295623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 
00:37:21.764 [2024-10-13 20:07:11.295755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.295796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.295984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.296017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.296173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.296211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.296355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.296402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.296591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.296625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.296777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.296815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.296972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.297011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.297185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.297219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.297365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.297430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.297572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.297620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 
00:37:21.764 [2024-10-13 20:07:11.297762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.297804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.297911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.297946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.298078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.298111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.298216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.298249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.298385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.298426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.298571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.298608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.298771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.298821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.298977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.299032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.299188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.299284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.299410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.299446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 
00:37:21.764 [2024-10-13 20:07:11.299557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.299593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.299752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.299786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.299968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.300030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.300177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.300215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.300373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.300419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.300565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.300599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.300712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.300762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.301045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.301101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.301210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.301248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 00:37:21.764 [2024-10-13 20:07:11.301404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.764 [2024-10-13 20:07:11.301439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.764 qpair failed and we were unable to recover it. 
00:37:21.764 [2024-10-13 20:07:11.301540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.301574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.301737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.301788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.301977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.302043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.302228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.302267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.302419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.302479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.302609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.302642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.302745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.302797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.302949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.302986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.303167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.303204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.303353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.303390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 
00:37:21.765 [2024-10-13 20:07:11.303572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.303622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.303804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.303872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.304034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.304090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.304225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.304261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.304400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.304436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.304604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.304660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.304823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.304862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.305069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.305142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.305290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.305324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.305426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.305461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 
00:37:21.765 [2024-10-13 20:07:11.305594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.305633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.305793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.305843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.305991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.306028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.306186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.306223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.306352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.306386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.306578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.306629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.306845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.306886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.307089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.307128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.307287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.307327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.307501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.307550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 
00:37:21.765 [2024-10-13 20:07:11.307722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.307757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.307967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.308036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.308306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.308363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.308510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.308545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.308733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.308805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.308979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.309037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.309220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.309258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.309378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.309452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.309603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.309662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 00:37:21.765 [2024-10-13 20:07:11.309816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.765 [2024-10-13 20:07:11.309860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.765 qpair failed and we were unable to recover it. 
00:37:21.765 [2024-10-13 20:07:11.310059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.310129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.310251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.310291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.310416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.310450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.310586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.310620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.310774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.310811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.310928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.310979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.311114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.311151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.311326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.311380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.311555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.311604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.311772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.311808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 
00:37:21.766 [2024-10-13 20:07:11.311937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.311971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.312126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.312160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.312325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.312360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.312526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.312562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.312708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.312746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.312963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.312997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.313188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.313226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.313368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.313418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.313568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.313602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.313787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.313825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 
00:37:21.766 [2024-10-13 20:07:11.313974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.314016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.314213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.314251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.314386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.314462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.314623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.314689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.314845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.314881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.315009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.315063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.315324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.315384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.315573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.315607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.315885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.315940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.316212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.316271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 
00:37:21.766 [2024-10-13 20:07:11.316426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.316462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.316598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.316632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.316812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.316849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.317047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.317084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.317244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.317285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.317448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.317483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.317592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.317627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.317800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.317838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.318084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.318141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 00:37:21.766 [2024-10-13 20:07:11.318344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.318404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.766 qpair failed and we were unable to recover it. 
00:37:21.766 [2024-10-13 20:07:11.318576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.766 [2024-10-13 20:07:11.318611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.318796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.318833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.319060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.319121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.319281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.319319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.319520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.319555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.319678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.319712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.319852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.319888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.320135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.320191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.320375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.320419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.320553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.320604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 
00:37:21.767 [2024-10-13 20:07:11.320762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.320826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.321034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.321133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.321254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.321293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.321452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.321486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.321641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.321675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.321776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.321809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.321952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.322049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.322191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.322243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.322388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.322442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.322639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.322711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 
00:37:21.767 [2024-10-13 20:07:11.322876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.322919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.323034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.323069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.323229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.323264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.323437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.323474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.323587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.323621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.323715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.323750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.323880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.323914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.324052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.324087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.324243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.324283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.324426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.324461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 
00:37:21.767 [2024-10-13 20:07:11.324611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.324660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.324803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.324842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.325049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.325088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.325257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.325295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.325413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.325467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.325608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.767 [2024-10-13 20:07:11.325643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.767 qpair failed and we were unable to recover it. 00:37:21.767 [2024-10-13 20:07:11.325792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.325829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.326011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.326072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.326251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.326286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.326412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.326466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 
00:37:21.768 [2024-10-13 20:07:11.326611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.326662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.326797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.326831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.326940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.326990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.327155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.327198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.327364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.327405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.327535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.327570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.327698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.327732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.327888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.327938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.328111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.328165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.328285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.328335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 
00:37:21.768 [2024-10-13 20:07:11.328497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.328531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.328648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.328683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.328812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.328866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.329005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.329043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.329181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.329236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.329420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.329492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.329633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.329669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.329817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.329852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.330006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.330058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.330197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.330235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 
00:37:21.768 [2024-10-13 20:07:11.330435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.330491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.330622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.330658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.330852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.330890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.331021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.331072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.331241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.331278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.331407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.331463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.331618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.331666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.331854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.331907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.332051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.332089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.332204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.332242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 
00:37:21.768 [2024-10-13 20:07:11.332389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.332441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.332582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.332616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.332763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.332830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.332966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.333021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.333195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.333253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.333377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.768 [2024-10-13 20:07:11.333455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.768 qpair failed and we were unable to recover it. 00:37:21.768 [2024-10-13 20:07:11.333603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.333653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.333840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.333899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.334030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.334089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.334203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.334242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 
00:37:21.769 [2024-10-13 20:07:11.334424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.334458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.334561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.334595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.334757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.334795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.334988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.335046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.335196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.335233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.335348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.335386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.335537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.335572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.335689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.335737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.335909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.335968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.336178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.336235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 
00:37:21.769 [2024-10-13 20:07:11.336369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.336418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.336616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.336651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.336824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.336881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.337048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.337104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.337254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.337291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.337487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.337538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.337712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.337766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.337949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.337990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.338159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.338198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.338353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.338387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 
00:37:21.769 [2024-10-13 20:07:11.338522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.338577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.338748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.338801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.339059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.339097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.339212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.339261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.339373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.339427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.339564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.339601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.339754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.339791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.339924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.339961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.340162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.340196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.340349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.340386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 
00:37:21.769 [2024-10-13 20:07:11.340555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.340606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.340769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.340811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.341002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.341056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.341202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.341257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.341383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.341452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.341576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.769 [2024-10-13 20:07:11.341612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.769 qpair failed and we were unable to recover it. 00:37:21.769 [2024-10-13 20:07:11.341757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.341812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.341954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.342016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.342138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.342193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.342361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.342406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 
00:37:21.770 [2024-10-13 20:07:11.342535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.342569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.342696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.342730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.342834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.342883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.343022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.343059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.343236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.343273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.343419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.343459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.343565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.343599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.343781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.343831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.343988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.344043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.344162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.344217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 
00:37:21.770 [2024-10-13 20:07:11.344354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.344388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.344582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.344618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.344737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.344781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.344941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.344975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.345107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.345141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.345267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.345302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.345444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.345480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.345637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.345682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.345797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.345832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.345968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 
00:37:21.770 [2024-10-13 20:07:11.346124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.346289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.346462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.346630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.346800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.346952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.346990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.347124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.347162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.347319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.347385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.347518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.347555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.347734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.347790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 
00:37:21.770 [2024-10-13 20:07:11.347953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.348004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.348171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.348228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.348387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.348458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.348590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.348654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.348837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.348889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.349007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.349060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.770 qpair failed and we were unable to recover it. 00:37:21.770 [2024-10-13 20:07:11.349172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.770 [2024-10-13 20:07:11.349208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.349334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.349368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.349509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.349559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.349704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.349740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 
00:37:21.771 [2024-10-13 20:07:11.349870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.349904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.350073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.350233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.350374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.350514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.350646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.350839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.350992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.351046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.351205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.351239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.351380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.351437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 
00:37:21.771 [2024-10-13 20:07:11.351542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.351577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.351699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.351735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.351895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.351929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.352089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.352129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.352277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.352315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.352470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.352507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.352628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.352666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.352782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.352819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.352986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.353023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.353188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.353242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 
00:37:21.771 [2024-10-13 20:07:11.353372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.353419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.353579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.353633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.353819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.353875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.354078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.354135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.354293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.354327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.354462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.354496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.354593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.354627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.354798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.354836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.355024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.355079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.355194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.355232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 
00:37:21.771 [2024-10-13 20:07:11.355380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.355448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.771 [2024-10-13 20:07:11.355598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.771 [2024-10-13 20:07:11.355636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.771 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.355805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.355858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.356009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.356062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.356216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.356273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.356445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.356480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.356588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.356622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.356748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.356782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.356917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.356951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.357103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.357138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 
00:37:21.772 [2024-10-13 20:07:11.357315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.357363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.357526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.357576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.357721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.357757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.357862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.357898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.358004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.358041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.358255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.358294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.358470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.358519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.358633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.358692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.358842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.358881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.359050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.359088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 
00:37:21.772 [2024-10-13 20:07:11.359230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.359268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.359378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.359426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.359621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.359670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.359906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.359979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.360201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.360261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.360435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.360472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.360611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.360644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.360797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.360835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.360939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.360977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.361188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.361247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 
00:37:21.772 [2024-10-13 20:07:11.361363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.361417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.361598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.361631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.361818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.361854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.361999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.362036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.362180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.362218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.362390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.362446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.362607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.362656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.362851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.362893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.363056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.363123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.363350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.363388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 
00:37:21.772 [2024-10-13 20:07:11.363556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.363591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.772 [2024-10-13 20:07:11.363749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.772 [2024-10-13 20:07:11.363786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.772 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.363907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.363958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.364098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.364136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.364308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.364362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.364558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.364607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.364863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.364944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.365199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.365259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.365409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.365465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.365579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.365615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 
00:37:21.773 [2024-10-13 20:07:11.365722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.365757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.366029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.366089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.366256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.366297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.366496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.366532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.366663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.366712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.366924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.366995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.367097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.367132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.367274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.367321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.367453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.367489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.367603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.367647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 
00:37:21.773 [2024-10-13 20:07:11.367870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.367908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.368051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.368089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.368285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.368340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.368530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.368580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.368768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.368817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.368951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.368992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.369186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.369246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.369383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.369437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.369595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.369629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.369807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.369861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 
00:37:21.773 [2024-10-13 20:07:11.370154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.370217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.370362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.370410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.370570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.370604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.370753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.370788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.370918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.370952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.371095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.371129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.371307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.371344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.371507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.371541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.371650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.371703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.371852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.371886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 
00:37:21.773 [2024-10-13 20:07:11.372022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.372056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.372212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.773 [2024-10-13 20:07:11.372263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.773 qpair failed and we were unable to recover it. 00:37:21.773 [2024-10-13 20:07:11.372415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.372450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.372579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.372629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.372847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.372919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.373177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.373238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.373386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.373432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.373587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.373622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.373741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.373775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.373876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.373929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 
00:37:21.774 [2024-10-13 20:07:11.374047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.374085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.374255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.374293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.374428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.374494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.374650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.374721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.374932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.374973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.375119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.375158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.375299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.375337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.375523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.375578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.375730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.375767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.375996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.376070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 
00:37:21.774 [2024-10-13 20:07:11.376331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.376391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.376566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.376602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.376798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.376853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.376964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.377001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.377158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.377213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.377383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.377428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.377564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.377598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.377730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.377765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.377919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.377973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.378115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.378150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 
00:37:21.774 [2024-10-13 20:07:11.378299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.378348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.378534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.378583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.378748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.378789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.378983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.379045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.379284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.379322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.379480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.379515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.379647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.379704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.379894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.379949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.380096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.380149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.380287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.380322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 
00:37:21.774 [2024-10-13 20:07:11.380473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.380523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.380663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.774 [2024-10-13 20:07:11.380714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.774 qpair failed and we were unable to recover it. 00:37:21.774 [2024-10-13 20:07:11.380854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.380890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.381097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.381169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.381318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.381355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.381523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.381558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.381717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.381758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.381970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.382043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.382298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.382357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.382500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.382537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 
00:37:21.775 [2024-10-13 20:07:11.382680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.382714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.382820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.382885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.383051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.383094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.383232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.383284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.383433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.383483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.383584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.383618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.383723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.383756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.383885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.383924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.384114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.384152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.384306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.384340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 
00:37:21.775 [2024-10-13 20:07:11.384476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.384511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.384639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.384674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.384830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.384864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.385005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.385042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.385211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.385248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.385411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.385445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.385580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.385614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.385758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.385795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.385914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.385966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.386137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.386175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 
00:37:21.775 [2024-10-13 20:07:11.386317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.386355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.386489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.386523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.386648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.386700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.386871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.386909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.387070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.387108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.387274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.387312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.387451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.387485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.775 [2024-10-13 20:07:11.387588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.775 [2024-10-13 20:07:11.387623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.775 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.387751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.387801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.388001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.388039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 
00:37:21.776 [2024-10-13 20:07:11.388182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.388220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.388362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.388407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.388575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.388624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.388793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.388830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.388997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.389044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.389187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.389226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.389342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.389377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.389546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.389581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.389720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.389758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.389962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.390000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 
00:37:21.776 [2024-10-13 20:07:11.390101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.390139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.390256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.390306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.390471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.390506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.390625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.390660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.390857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.390906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.391157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.391216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.391352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.391387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.391511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.391553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.391690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.391724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.391877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.391915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 
00:37:21.776 [2024-10-13 20:07:11.392069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.392128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.392270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.392318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.392491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.392535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.392701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.392740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.392872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.392924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.393098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.393136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.393270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.393307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.393494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.393529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.393680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.393730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.393846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.393883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 
00:37:21.776 [2024-10-13 20:07:11.394109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.394163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.394314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.394352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.394499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.394533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.394658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.394708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.394959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.395020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.395136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.395174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.395354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.395392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.776 [2024-10-13 20:07:11.395530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.776 [2024-10-13 20:07:11.395564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.776 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.395698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.395760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.395912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.395957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 
00:37:21.777 [2024-10-13 20:07:11.396174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.396232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.396377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.396425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.396554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.396590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.396740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.396789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.396949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.397005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.397132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.397171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.397312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.397347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.397462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.397498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.397628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.397681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.397841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.397876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 
00:37:21.777 [2024-10-13 20:07:11.398033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.398067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.398172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.398207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.398334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.398369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.398533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.398602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.398754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.398795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.398957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.399012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.399120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.399185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.399333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.399376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.399527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.399562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.399700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.399755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 
00:37:21.777 [2024-10-13 20:07:11.399907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.399948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.400117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.400156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.400301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.400339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.400509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.400544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.400669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.400706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.400883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.400920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.401067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.401105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.401285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.401336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.401465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.401502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.401671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.401745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 
00:37:21.777 [2024-10-13 20:07:11.401916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.401975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.402202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.402260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.402426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.402482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.402607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.402643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.402752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.402786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.402910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.402969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.403175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.403231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.403434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.777 [2024-10-13 20:07:11.403469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.777 qpair failed and we were unable to recover it. 00:37:21.777 [2024-10-13 20:07:11.403625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.403659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.403847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.403884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 
00:37:21.778 [2024-10-13 20:07:11.404124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.404186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.404357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.404403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.404647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.404716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.404872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.404908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.405125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.405181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.405353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.405390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.405520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.405554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.405716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.405768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.405936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.405997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.406130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.406186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 
00:37:21.778 [2024-10-13 20:07:11.406351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.406388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.406523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.406558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.406716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.406750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.406897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.406935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.407093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.407152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.407351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.407389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.407550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.407585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.407720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.407780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.407978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.408044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.408182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.408227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 
00:37:21.778 [2024-10-13 20:07:11.408371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.408424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.408543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.408576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.408712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.408763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.408911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.408949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.409085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.409137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.409307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.409344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.409506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.409541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.409679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.409713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.409867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.409906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.410053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.410090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 
00:37:21.778 [2024-10-13 20:07:11.410288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.410326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.410499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.410534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.410669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.410724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.410913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.410951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.411094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.411131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.411281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.411318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.411490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.411552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.778 [2024-10-13 20:07:11.411729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.778 [2024-10-13 20:07:11.411767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.778 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.411928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.411982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.412160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.412215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 
00:37:21.779 [2024-10-13 20:07:11.412324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.412360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.412504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.412540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.412722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.412791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.412960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.412998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.413190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.413247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.413389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.413451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.413586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.413629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.413819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.413857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.414061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.414099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.414275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.414312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 
00:37:21.779 [2024-10-13 20:07:11.414449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.414503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.414677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.414736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.414931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.414986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.415145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.415212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.415366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.415406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.415514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.415548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.415682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.415716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.415869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.415908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.416060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.416093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.416217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.416251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 
00:37:21.779 [2024-10-13 20:07:11.416351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.416388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.416546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.416579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.416710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.416742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.416869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.416901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.417042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.417077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.417203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.417236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.417429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.417465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.417607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.417663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.417815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.417865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.418052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.418103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 
00:37:21.779 [2024-10-13 20:07:11.418236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.418271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.418385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.418433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.418585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.418637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.418800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.418852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.419003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.419058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.419167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.419202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.419307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.419339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.419498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.779 [2024-10-13 20:07:11.419548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.779 qpair failed and we were unable to recover it. 00:37:21.779 [2024-10-13 20:07:11.419768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.419829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.420102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.420160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 
00:37:21.780 [2024-10-13 20:07:11.420266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.420316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.420483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.420517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.420645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.420679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.420816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.420850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.421081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.421136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.421281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.421317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.421463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.421497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.421621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.421676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.421817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.421854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.422002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.422038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 
00:37:21.780 [2024-10-13 20:07:11.422174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.422212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.422356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.422402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.422546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.422596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.422750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.422817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.422949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.423005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.423130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.423183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.423345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.423379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.423527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.423567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.423749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.423783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.423918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.423952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 
00:37:21.780 [2024-10-13 20:07:11.424108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.424142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.424365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.424403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.424520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.424553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.424688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.424722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.424947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.424984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.425139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.425196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.425344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.425381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.425507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.425540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.425696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.425747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 00:37:21.780 [2024-10-13 20:07:11.425894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.780 [2024-10-13 20:07:11.425932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.780 qpair failed and we were unable to recover it. 
00:37:21.780 [2024-10-13 20:07:11.426203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.780 [2024-10-13 20:07:11.426241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:21.780 qpair failed and we were unable to recover it.
00:37:21.781 [2024-10-13 20:07:11.427569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.781 [2024-10-13 20:07:11.427617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:21.781 qpair failed and we were unable to recover it.
00:37:21.781 [2024-10-13 20:07:11.429758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.781 [2024-10-13 20:07:11.429826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:21.781 qpair failed and we were unable to recover it.
00:37:21.786 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x6150001ffe80, 0x6150001f2f00, and 0x615000210000, all targeting addr=10.0.0.2, port=4420, from 2024-10-13 20:07:11.426203 through 20:07:11.467702 ...]
00:37:21.786 [2024-10-13 20:07:11.467846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.467895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.468048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.468122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.468263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.468301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.468452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.468501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.468611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.468658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.468794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.468829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.468972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.469028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.469171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.469206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.469314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.469348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.469491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.469529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 
00:37:21.786 [2024-10-13 20:07:11.469649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.469687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.469829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.469865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.470048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.470086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.470262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.470295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.470455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.470489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.470624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.470670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.470801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.470834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.470943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.470988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.471124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.471158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.471304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.471350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 
00:37:21.786 [2024-10-13 20:07:11.471540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.471577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.471698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.471735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.471870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.471905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.472051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.472090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.472221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.472257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.472408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.472468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.472615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.472675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.472872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.472915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.473065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.473104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 00:37:21.786 [2024-10-13 20:07:11.473216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.786 [2024-10-13 20:07:11.473266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.786 qpair failed and we were unable to recover it. 
00:37:21.786 [2024-10-13 20:07:11.473442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.473478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.473643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.473697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.473849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.473909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.474060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.474112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.474245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.474278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.474435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.474484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.474627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.474664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.474795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.474829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.474977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.475012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.475141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.475174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 
00:37:21.787 [2024-10-13 20:07:11.475332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.475366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.475497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.475532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.475640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.475675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.475782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.475816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.475975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.476015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.476156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.476192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.476326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.476360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.476509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.476548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.476670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.476717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.476899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.476950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 
00:37:21.787 [2024-10-13 20:07:11.477092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.477128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.477243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.477276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.477408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.477451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.477585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.477620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.477731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.477775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.477898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.477951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.478093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.478145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.478278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.478312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.478457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.478495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.478656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.478692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 
00:37:21.787 [2024-10-13 20:07:11.478851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.478886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.479015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.479051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.479213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.479250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.479376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.479418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.479525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.479560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.479703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.479737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.479870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.479904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.480009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.480043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.480201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.480241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 00:37:21.787 [2024-10-13 20:07:11.480405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.787 [2024-10-13 20:07:11.480456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.787 qpair failed and we were unable to recover it. 
00:37:21.787 [2024-10-13 20:07:11.480568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.480601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.480752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.480792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.481116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.481159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.481286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.481323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.481465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.481501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.481684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.481733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.481888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.481951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.482129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.482187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.482318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.482352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.482505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.482542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 
00:37:21.788 [2024-10-13 20:07:11.482734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.482773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.482952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.482990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.483160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.483199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.483321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.483360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.483541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.483581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.483731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.483770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.483904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.483940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.484077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.484116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.484230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.484268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.484421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.484455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 
00:37:21.788 [2024-10-13 20:07:11.484570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.484619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.484765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.484801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.485017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.485056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.485209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.485248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.485457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.485504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.485628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.485666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.485918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.485993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.486255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.486313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.486470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.486505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.486616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.486652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 
00:37:21.788 [2024-10-13 20:07:11.486860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.486912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.487087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.487126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.487239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.487277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.487435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.487470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.487604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.487638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.487762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.788 [2024-10-13 20:07:11.487816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.788 qpair failed and we were unable to recover it. 00:37:21.788 [2024-10-13 20:07:11.487988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.488027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.488226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.488265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.488406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.488459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.488588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.488624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 
00:37:21.789 [2024-10-13 20:07:11.488755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.488790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.488920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.488954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.489107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.489144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.489275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.489328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.489501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.489541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.489673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.489722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.489858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.489894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.490071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.490109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.490252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.490291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.490456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.490501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 
00:37:21.789 [2024-10-13 20:07:11.490613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.490645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.490806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.490844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.491007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.491045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.491194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.491233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.491369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.491426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.491603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.491648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.491811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.491864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.492084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.492123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.492319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.492357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.492501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.492536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 
00:37:21.789 [2024-10-13 20:07:11.492661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.492712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.492843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.492876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.493027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.493064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.493206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.493243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.493368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.493410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.493577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.493628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.493851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.493905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.494114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.494166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.494320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.494358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.494488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.494522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 
00:37:21.789 [2024-10-13 20:07:11.494638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.494672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.494810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.494845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.495114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.495173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.495302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.495335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.495453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.495487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.495621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.789 [2024-10-13 20:07:11.495654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.789 qpair failed and we were unable to recover it. 00:37:21.789 [2024-10-13 20:07:11.495768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.495801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.495943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.495995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.496118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.496156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.496359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.496406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 
00:37:21.790 [2024-10-13 20:07:11.496527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.496561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.496717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.496754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.496904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.496937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.497072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.497123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.497282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.497316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.497420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.497458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.497590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.497623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.497766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.497804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.497959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.497993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.498126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.498175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 
00:37:21.790 [2024-10-13 20:07:11.498344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.498380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.498559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.498593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.498698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.498731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.498866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.498899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.499054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.499092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.499286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.499340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.499546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.499583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.499688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.499723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.499880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.499914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.500040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.500079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 
00:37:21.790 [2024-10-13 20:07:11.500203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.500237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.500370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.500414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.500572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.500622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.500800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.500837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.500933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.500967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.501204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.501270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.501448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.501483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.501600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.501635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.501801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.501839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.501959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.501992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 
00:37:21.790 [2024-10-13 20:07:11.502211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.502248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.502392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.502473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.502623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.502658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.502825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.502858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.503085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.503161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.503309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.790 [2024-10-13 20:07:11.503342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.790 qpair failed and we were unable to recover it. 00:37:21.790 [2024-10-13 20:07:11.503491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.503525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.503648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.503682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.503786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.503820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.503944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.503994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 
00:37:21.791 [2024-10-13 20:07:11.504114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.504152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.504319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.504352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.504470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.504504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.504638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.504671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.504798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.504832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.504990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.505024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.505179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.505216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.505344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.505377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.505512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.505563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.505779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.505828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 
00:37:21.791 [2024-10-13 20:07:11.505941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.505978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.506072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.506106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.506268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.506306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.506464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.506499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.506598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.506638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.506770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.506808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.506936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.506970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.507127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.507182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.507328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.507365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.507534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.507567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 
00:37:21.791 [2024-10-13 20:07:11.507672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.507704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.507896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.507933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.508058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.508090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.508224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.508258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.508391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.508453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.508543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.508577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.508715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.508747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.508878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.508914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.509064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.509099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.509258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.509310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 
00:37:21.791 [2024-10-13 20:07:11.509465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.509498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.509602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.509640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.509777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.509827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.509985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.510021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.510181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.510214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.510345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.510407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.791 qpair failed and we were unable to recover it. 00:37:21.791 [2024-10-13 20:07:11.510589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.791 [2024-10-13 20:07:11.510621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.510778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.510811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.510962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.510998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.511138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.511174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 
00:37:21.792 [2024-10-13 20:07:11.511333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.511367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.511485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.511518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.511610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.511644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.511815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.511848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.511985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.512019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.512183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.512231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.512390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.512433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.512560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.512594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.512724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.512760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.512912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.512946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 
00:37:21.792 [2024-10-13 20:07:11.513084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.513136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.513279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.513315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.513460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.513495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.513589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.513623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.513794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.513855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.514017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.514054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.514161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.514196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.514450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.514485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.514620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.514653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.514753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.514784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 
00:37:21.792 [2024-10-13 20:07:11.514914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.514951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.515126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.515159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.515266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.515319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.515513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.515550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.515687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.515724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.515897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.515951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.516102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.516140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.516294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.516329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.516520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.516556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.516667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.516701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 
00:37:21.792 [2024-10-13 20:07:11.516837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.516872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.516974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.517025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.517182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.517225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.517352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.517387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.517533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.517570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.517741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.517838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.792 [2024-10-13 20:07:11.517996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.792 [2024-10-13 20:07:11.518030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.792 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.518168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.518201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.518366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.518409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.518539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.518574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 
00:37:21.793 [2024-10-13 20:07:11.518717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.518755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.518880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.518919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.519073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.519108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.519307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.519362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.519555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.519592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.519705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.519740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.519901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.519935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.520044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.520079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.520234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.520268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.520407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.520457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 
00:37:21.793 [2024-10-13 20:07:11.520596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.520633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.520811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.520846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.520954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.521008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.521180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.521257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.521410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.521451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.521551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.521586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.521760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.521815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.521992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.522028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.522178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.522216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.522361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.522406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 
00:37:21.793 [2024-10-13 20:07:11.522535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.522568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.522708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.522745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.522882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.522916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.523046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.523081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.523214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.523248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.523380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.523422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.523527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.523560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.523694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.523743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.524007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.524065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.524214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.524247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 
00:37:21.793 [2024-10-13 20:07:11.524353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.524386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.524562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.793 [2024-10-13 20:07:11.524596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.793 qpair failed and we were unable to recover it. 00:37:21.793 [2024-10-13 20:07:11.524725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.524758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.524917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.524951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.525084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.525121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.525249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.525282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.525404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.525438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.525569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.525619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.525761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.525797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.525967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.526001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 
00:37:21.794 [2024-10-13 20:07:11.526159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.526198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.526340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.526375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.526533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.526582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.526720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.526759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.526912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.526946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.527102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.527135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.527267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.527301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.527463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.527498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.527601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.527635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.527789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.527826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 
00:37:21.794 [2024-10-13 20:07:11.527975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.528010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.528224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.528260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.528430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.528482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.528585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.528617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.528821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.528878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.529046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.529083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.529208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.529241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.529445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.529480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.529604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.529653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.529764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.529801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 
00:37:21.794 [2024-10-13 20:07:11.529928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.529964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.530068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.530101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.530239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.530273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.530409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.530445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.530550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.530585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.530713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.530747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.530872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.530905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.531008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.531042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.531177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.531212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.531386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.531450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 
00:37:21.794 [2024-10-13 20:07:11.531594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.531630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.531742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.794 [2024-10-13 20:07:11.531777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.794 qpair failed and we were unable to recover it. 00:37:21.794 [2024-10-13 20:07:11.531886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.531919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.532038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.532076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.532261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.532294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.532433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.532467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.532593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.532627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.532748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.532781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.532913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.532964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.533116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.533151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 
00:37:21.795 [2024-10-13 20:07:11.533279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.533314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.533465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.533527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.533710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.533785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.533965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.534002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.534152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.534193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.534328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.534367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.534551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.534586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.534771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.534811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.534983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.535021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.535177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.535211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 
00:37:21.795 [2024-10-13 20:07:11.535336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.535384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.535548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.535582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.535683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.535716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.535818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.535852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.536012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.536052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.536186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.536220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.536353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.536389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.536622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.536657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.536796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.536831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.536999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.537033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 
00:37:21.795 [2024-10-13 20:07:11.537325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.537414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.537583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.537619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.537796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.537834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.538005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.538066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.538194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.538227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.538367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.538411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.538539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.538588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.538732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.538768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.538908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.538944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.539091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.539142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 
00:37:21.795 [2024-10-13 20:07:11.539324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.539358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.539474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.539510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.539613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.539649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.539847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.795 [2024-10-13 20:07:11.539882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.795 qpair failed and we were unable to recover it. 00:37:21.795 [2024-10-13 20:07:11.540041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.540078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.540214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.540252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.540412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.540447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.540555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.540589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.540715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.540749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.540934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.540970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 
00:37:21.796 [2024-10-13 20:07:11.541076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.541138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.541320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.541358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.541516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.541551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.541658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.541696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.541837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.541876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.541999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.542033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.542194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.542243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.542350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.542387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.542547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.542582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.542715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.542768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 
00:37:21.796 [2024-10-13 20:07:11.542921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.542959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.543114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.543147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.543241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.543278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.543494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.543544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.543654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.543695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.543824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.543858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.543970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.544004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.544135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.544186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.544350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.544390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.544552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.544586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 
00:37:21.796 [2024-10-13 20:07:11.544684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.544719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.544878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.544911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.545036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.545073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.545213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.545252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.545383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.545428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.545585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.545618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:21.796 [2024-10-13 20:07:11.545779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.796 [2024-10-13 20:07:11.545839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:21.796 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.545972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.546051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.546235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.546288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.546471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.546508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 
00:37:22.082 [2024-10-13 20:07:11.546644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.546678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.546821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.546859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.546995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.547034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.547236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.547275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.547452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.547502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.547656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.547693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.547836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.547870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.547969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.548000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.548152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.548185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.548344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.548377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 
00:37:22.082 [2024-10-13 20:07:11.548501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.548534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.548715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.548783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.548928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.548983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.549130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.549170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.549316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.549354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.549499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.549549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.549697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.549734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.549892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.549950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.550133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.550191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.550299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.550333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 
00:37:22.082 [2024-10-13 20:07:11.550495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.550530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.550628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.550660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.550818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.550852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.550953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.550987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.551104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.551144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.551281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.551315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.551448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.551482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.551585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.551618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.551747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.551780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.551915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.551947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 
00:37:22.082 [2024-10-13 20:07:11.552099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.552151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.552310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.552345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.552471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.552520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.552664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.552698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.552859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.552896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.082 qpair failed and we were unable to recover it. 00:37:22.082 [2024-10-13 20:07:11.553012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.082 [2024-10-13 20:07:11.553049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.553190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.553227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.553388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.553434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.553584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.553618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.553747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.553780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 
00:37:22.083 [2024-10-13 20:07:11.553913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.553947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.554100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.554138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.554277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.554328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.554487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.554537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.554691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.554740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.554905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.554960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.555126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.555179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.555285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.555321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.555455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.555490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.555615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.555654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 
00:37:22.083 [2024-10-13 20:07:11.555758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.555795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.555911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.555949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.556203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.556259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.556412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.556465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.556561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.556594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.556716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.556754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.556859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.556896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.557046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.557198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.557367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 
00:37:22.083 [2024-10-13 20:07:11.557527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.557671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.557798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.557958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.557995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.558122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.558164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.558299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.558333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.558502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.558555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.558664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.558700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.558845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.558897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.559048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.559099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 
00:37:22.083 [2024-10-13 20:07:11.559260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.559296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.559430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.559464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.559588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.559622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.559840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.559909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.560049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.083 [2024-10-13 20:07:11.560087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.083 qpair failed and we were unable to recover it. 00:37:22.083 [2024-10-13 20:07:11.560273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.560312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.560440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.560474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.560582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.560617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.560786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.560837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.560976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.561056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 
00:37:22.084 [2024-10-13 20:07:11.561225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.561262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.561368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.561422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.561577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.561616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.561719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.561753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.561885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.561918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.562069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.562107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.562224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.562261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.562428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.562463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.562639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.562687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.562869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.562925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 
00:37:22.084 [2024-10-13 20:07:11.563118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.563159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.563294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.563328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.563465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.563498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.563687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.563741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.563865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.563905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.564030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.564084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.564226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.564264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.564453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.564503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.564659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.564708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.564931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.564971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 
00:37:22.084 [2024-10-13 20:07:11.565137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.565201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.565339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.565376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.565516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.565554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.565681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.565735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.565860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.565921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.566027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.566067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.566175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.566211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.566345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.566379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.566518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.566553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.566737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.566790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 
00:37:22.084 [2024-10-13 20:07:11.566925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.566990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.567218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.567253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.567391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.567434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.567582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.567631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.567771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.084 [2024-10-13 20:07:11.567813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.084 qpair failed and we were unable to recover it. 00:37:22.084 [2024-10-13 20:07:11.568061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.568118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.568228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.568267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.568439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.568506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.568644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.568693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.568978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.569053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 
00:37:22.085 [2024-10-13 20:07:11.569255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.569296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.569523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.569558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.569653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.569685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.569814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.569848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.570059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.570128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.570270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.570307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.570483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.570533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.570724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.570765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.570932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.570971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.571096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.571134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 
00:37:22.085 [2024-10-13 20:07:11.571269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.571306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.571484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.571533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.571650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.571685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.571821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.571874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.572103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.572157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.572338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.572379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.572519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.572553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.572722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.572760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.572965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.573003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.573183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.573251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 
00:37:22.085 [2024-10-13 20:07:11.573428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.573465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.573576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.573608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.573714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.573748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.573903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.573966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.574208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.574267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.574429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.574482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.574613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.574662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.574817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.574852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.574951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.575004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.575160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.575212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 
00:37:22.085 [2024-10-13 20:07:11.575370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.575411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.575562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.575611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.575817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.575899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.576044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.576101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.576217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.085 [2024-10-13 20:07:11.576255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.085 qpair failed and we were unable to recover it. 00:37:22.085 [2024-10-13 20:07:11.576402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.576456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.576614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.576648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.576878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.576940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.577205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.577264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.577445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.577479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 
00:37:22.086 [2024-10-13 20:07:11.577582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.577615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.577748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.577785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.577975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.578012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.578149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.578202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.578344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.578382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.578550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.578585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.578737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.578775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.578920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.578957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.579179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.579218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.579354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.579389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 
00:37:22.086 [2024-10-13 20:07:11.579555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.579588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.579750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.579825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.579936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.579973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.580168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.580227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.580386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.580429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.580643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.580676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.580859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.580896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.581039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.581077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.581186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.581224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.581414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.581475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 
00:37:22.086 [2024-10-13 20:07:11.581630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.581679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.581841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.581896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.582139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.582196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.582319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.582353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.582461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.582494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.582628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.582677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.582822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.582857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.582986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.583021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.583201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.086 [2024-10-13 20:07:11.583239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.086 qpair failed and we were unable to recover it. 00:37:22.086 [2024-10-13 20:07:11.583381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.583446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 
00:37:22.087 [2024-10-13 20:07:11.583571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.583620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.583825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.583866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.584042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.584080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.584201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.584240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.584392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.584434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.584588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.584636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.584781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.584820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.584942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.584995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.585154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.585192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.585329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.585378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 
00:37:22.087 [2024-10-13 20:07:11.585534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.585583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.585748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.585784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.585977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.586039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.586196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.586269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.586421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.586475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.586614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.586649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.586842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.586903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.587027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.587111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.587303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.587341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.587488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.587524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 
00:37:22.087 [2024-10-13 20:07:11.587658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.587693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.587825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.587866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.587968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.588002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.588111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.588145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.588271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.588304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.588442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.588478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.588633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.588683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.588865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.588920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.589163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.589204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.589350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.589388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 
00:37:22.087 [2024-10-13 20:07:11.589553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.589586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.589760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.589797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.589976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.590033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.590293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.590368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.590505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.590541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.590720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.590776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.591028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.591086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.591303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.591366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.591535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.087 [2024-10-13 20:07:11.591572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.087 qpair failed and we were unable to recover it. 00:37:22.087 [2024-10-13 20:07:11.591707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.591742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 
00:37:22.088 [2024-10-13 20:07:11.591878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.591913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.592083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.592135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.592331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.592369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.592521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.592569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.592786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.592855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.593102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.593163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.593325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.593359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.593503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.593540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.593761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.593794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.594013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.594050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 
00:37:22.088 [2024-10-13 20:07:11.594296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.594353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.594498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.594533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.594662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.594717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.594917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.594983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.595116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.595213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.595362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.595407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.595585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.595620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.595729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.595763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.595890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.595925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.596130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.596184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 
00:37:22.088 [2024-10-13 20:07:11.596366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.596418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.596556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.596597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.596755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.596821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.597008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.597048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.597175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.597227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.597350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.597387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.597549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.597583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.597691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.597741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.597852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.597889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.598006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.598043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 
00:37:22.088 [2024-10-13 20:07:11.598205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.598240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.598427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.598479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.598573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.598607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.598732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.598781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.599008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.599049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.599278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.599316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.599443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.599494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.599631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.599667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.599903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.599941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 00:37:22.088 [2024-10-13 20:07:11.600100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.088 [2024-10-13 20:07:11.600140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.088 qpair failed and we were unable to recover it. 
00:37:22.089 [2024-10-13 20:07:11.600290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.600327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.600469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.600503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.600660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.600694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.600848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.600885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.601055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.601094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.601219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.601258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.601422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.601474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.601608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.601642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.601750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.601802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.601944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.601982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 
00:37:22.089 [2024-10-13 20:07:11.602101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.602154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.602281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.602318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.602505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.602555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.602677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.602713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.602821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.602876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.602997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.603036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.603152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.603192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.603332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.603383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.603520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.603555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.603681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.603717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 
00:37:22.089 [2024-10-13 20:07:11.603860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.603897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.604042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.604085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.604253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.604302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.604419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.604458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.604597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.604637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.604793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.604831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.604976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.605014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.605158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.605195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.605309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.605349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.605502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.605551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 
00:37:22.089 [2024-10-13 20:07:11.605796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.605836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.606087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.606145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.606287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.606325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.606463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.606502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.606655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.606705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.606868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.606983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.607110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.607150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.607329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.607363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.607503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.607537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.089 qpair failed and we were unable to recover it. 00:37:22.089 [2024-10-13 20:07:11.607706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.089 [2024-10-13 20:07:11.607761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 
00:37:22.090 [2024-10-13 20:07:11.607989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.608048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.608204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.608259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.608447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.608481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.608587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.608621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.608746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.608795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.608936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.608972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.609139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.609176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.609322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.609359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.609505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.609542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.609639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.609673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 
00:37:22.090 [2024-10-13 20:07:11.609788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.609822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.610037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.610105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.610303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.610341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.610500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.610535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.610669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.610705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.610850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.610882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.611045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.611098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.611212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.611251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.611365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.611407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.611574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.611608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 
00:37:22.090 [2024-10-13 20:07:11.611766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.611803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.611972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.612015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.612160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.612197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.612332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.612370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.612556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.612605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.612713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.612749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.612933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.612987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.613134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.613188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.613343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.613377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.613500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.613535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 
00:37:22.090 [2024-10-13 20:07:11.613696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.613736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.613956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.614014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.614188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.614254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.614410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.090 [2024-10-13 20:07:11.614445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.090 qpair failed and we were unable to recover it. 00:37:22.090 [2024-10-13 20:07:11.614553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.614586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.614719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.614756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.614951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.614989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.615124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.615161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.615331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.615368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.615517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.615566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 
00:37:22.091 [2024-10-13 20:07:11.615715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.615764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.615918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.615971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.616234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.616296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.616433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.616468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.616628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.616662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.616822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.616859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.616980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.617031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.617151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.617191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.617366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.617448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.617612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.617662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 
00:37:22.091 [2024-10-13 20:07:11.617814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.617868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.617984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.618019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.618158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.618193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.618341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.618390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.618516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.618553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.618671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.618710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.618822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.618868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.618979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.619014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.619154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.619188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.619343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.619377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 
00:37:22.091 [2024-10-13 20:07:11.619487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.619522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.619670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.619730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.619877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.619915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.620052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.620085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.620243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.620277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.620381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.620424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.620553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.620587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.620694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.620731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.620967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.621004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.621152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.621189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 
00:37:22.091 [2024-10-13 20:07:11.621305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.621341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.621470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.621504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.621635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.621687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.621900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.621937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.091 [2024-10-13 20:07:11.622137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.091 [2024-10-13 20:07:11.622176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.091 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.622318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.622355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.622506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.622557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.622690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.622738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.622961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.623028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.623205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.623269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 
00:37:22.092 [2024-10-13 20:07:11.623405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.623440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.623587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.623640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.623874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.623913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.624118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.624181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.624294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.624331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.624492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.624525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.624617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.624650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.624802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.624839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.624969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.625021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.625169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.625206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 
00:37:22.092 [2024-10-13 20:07:11.625356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.625402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.625554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.625588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.625699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.625754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.625899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.625937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.626067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.626118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.626228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.626265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.626412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.626463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.626565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.626599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.626772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.626826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.626975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.627015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 
00:37:22.092 [2024-10-13 20:07:11.627152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.627205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.627324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.627367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.627538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.627571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.627696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.627729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.627877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.627913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.628021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.628058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.628213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.628250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.628366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.628412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.628580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.628629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.628738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.628794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 
00:37:22.092 [2024-10-13 20:07:11.628942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.628981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.629130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.629181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.629361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.629410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.629533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.629567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.092 [2024-10-13 20:07:11.629698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.092 [2024-10-13 20:07:11.629746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.092 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.629974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.630016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.630157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.630194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.630311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.630348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.630511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.630561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.630716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.630764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-10-13 20:07:11.630947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.631018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.631201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.631265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.631424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.631459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.631594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.631629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.631768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.631807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.631958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.631997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.632260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.632323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.632469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.632504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.632634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.632683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.632857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.632911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-10-13 20:07:11.633114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.633172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.633271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.633305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.633479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.633532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.633684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.633740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.633885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.633938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.634088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.634140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.634264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.634314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.634479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.634529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.634655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.634705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.634855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.634891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-10-13 20:07:11.634991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.635026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.635126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.635186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.635366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.635406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.635547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.635581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.635731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.635781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.635973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.636011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.636123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.636161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.636316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.636350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.636469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.636504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.636694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.636748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-10-13 20:07:11.636979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.637019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.637140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.637178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.637365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.637413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.637592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.637641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.637846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.093 [2024-10-13 20:07:11.637882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.093 qpair failed and we were unable to recover it. 00:37:22.093 [2024-10-13 20:07:11.638000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.638051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.638333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.638372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.638517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.638562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.638668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.638718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.638824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.638861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-10-13 20:07:11.638996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.639033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.639170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.639206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.639375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.639462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.639602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.639652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.639838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.639892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.640113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.640172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.640300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.640338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.640500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.640536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.640687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.640727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.640837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.640873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-10-13 20:07:11.641016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.641053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.641198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.641236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.641350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.641386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.641519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.641551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.641663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.641711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.641857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.641893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.642121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.642156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.642298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.642335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.642457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.642509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.642639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.642672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-10-13 20:07:11.642781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.642831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.642979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.643022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.643133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.643169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.643326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.643380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.643542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.643591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.643751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.643787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.643980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.644049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.644232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.644271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.644422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.644457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.644562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.644596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-10-13 20:07:11.644744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.644781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.644907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.644958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.645114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.094 [2024-10-13 20:07:11.645151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.094 qpair failed and we were unable to recover it. 00:37:22.094 [2024-10-13 20:07:11.645299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.645336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.645550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.645598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.645785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.645822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.645923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.645975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.646180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.646263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.646435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.646468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.646621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.646689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 
00:37:22.095 [2024-10-13 20:07:11.646909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.646969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.647225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.647280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.647409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.647462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.647579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.647614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.647720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.647755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.647894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.647947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.648197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.648255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.648409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.648461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.648597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.648632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.648773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.648822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 
00:37:22.095 [2024-10-13 20:07:11.649036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.649075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.649251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.649289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.649408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.649462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.649589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.649640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.649765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.649803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.650046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.650105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.650312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.650376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.650555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.650605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.650781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.650835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.651014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.651073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 
00:37:22.095 [2024-10-13 20:07:11.651233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.651301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.651460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.651500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.651638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.651691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.651874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.651912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.652050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.652087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.652229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.652266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.652417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.652450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.652581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.652615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.652713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.652765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.095 [2024-10-13 20:07:11.652913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.652952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 
00:37:22.095 [2024-10-13 20:07:11.653090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.095 [2024-10-13 20:07:11.653128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.095 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.653270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.653309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.653491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.653540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.653647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.653684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.653847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.653900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.654087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.654139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.654240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.654275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.654384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.654425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.654558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.654611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.654767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.654801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 
00:37:22.096 [2024-10-13 20:07:11.654937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.654971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.655196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.655254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.655411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.655462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.655614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.655664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.655820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.655877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.656095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.656155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.656343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.656383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.656563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.656611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.656795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.656851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.656981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.657033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 
00:37:22.096 [2024-10-13 20:07:11.657209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.657242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.657347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.657382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.657539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.657573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.657675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.657708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.657859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.657892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.658027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.658061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.658185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.658222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.658358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.658403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.658537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.658570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.658745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.658812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 
00:37:22.096 [2024-10-13 20:07:11.658975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.659028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.659213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.659258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.659454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.659489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.659649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.659700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.659825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.659881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.660033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.660071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.660221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.660258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.660427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.660476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.660623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.660657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.660818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.660872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 
00:37:22.096 [2024-10-13 20:07:11.661012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.096 [2024-10-13 20:07:11.661046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.096 qpair failed and we were unable to recover it. 00:37:22.096 [2024-10-13 20:07:11.661213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.661251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.661403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.661455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.661593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.661627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.661779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.661817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.661971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.662008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.662176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.662213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.662373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.662419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.662532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.662567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.662759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.662816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 
00:37:22.097 [2024-10-13 20:07:11.663011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.663063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.663204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.663258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.663390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.663432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.663578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.663631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.663810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.663862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.664012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.664065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.664216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.664251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.664402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.664437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.664581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.664629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.664780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.664815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 
00:37:22.097 [2024-10-13 20:07:11.664954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.664988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.665116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.665150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.665283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.665323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.665432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.665466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.665597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.665631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.665808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.665860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.666003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.666057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.666206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.666255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.666407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.666444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.666580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.666615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 
00:37:22.097 [2024-10-13 20:07:11.666835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.666894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.667087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.667161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.667310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.667347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.667483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.667518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.667720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.667774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.667927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.667966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.668114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.668176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.668330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.668364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.668522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.668572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.668714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.668770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 
00:37:22.097 [2024-10-13 20:07:11.668974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.097 [2024-10-13 20:07:11.669013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.097 qpair failed and we were unable to recover it. 00:37:22.097 [2024-10-13 20:07:11.669212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.669281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.669466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.669501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.669635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.669669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.669772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.669823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.670017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.670070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.670235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.670303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.670477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.670526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.670709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.670749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 
00:37:22.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3160013 Killed "${NVMF_APP[@]}" "$@" 00:37:22.098 [2024-10-13 20:07:11.670932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.670976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.671220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.671275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:22.098 [2024-10-13 20:07:11.671422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.671476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:22.098 [2024-10-13 20:07:11.671603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.671637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:22.098 [2024-10-13 20:07:11.671779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.671831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:22.098 [2024-10-13 20:07:11.672040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.672097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.672209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.672246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 
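The shell message above explains the refused connections: line 36 of test/nvmf/host/target_disconnect.sh has just killed the previous nvmf_tgt instance (pid 3160013), and the disconnect_init / nvmfappstart calls that follow bring up a replacement with core mask 0xF0. A rough Python re-creation of that kill-and-restart step is sketched below; the binary path and the -i/-e/-m arguments are copied from the nvmf_tgt command recorded a little further down in this log, and everything else (function name, the fixed sleep, plain Popen instead of the suite's helpers) is an illustrative assumption.

import os
import signal
import subprocess
import time

NVMF_TGT = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt"

def restart_target(old_pid):
    os.kill(old_pid, signal.SIGKILL)   # the old instance shows up as "Killed" above
    time.sleep(0.5)                    # window in which host connects get errno 111
    return subprocess.Popen([NVMF_TGT, "-i", "0", "-e", "0xFFFF", "-m", "0xF0"])

# e.g. restart_target(3160013), the pid reported as Killed in this log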
00:37:22.098 [2024-10-13 20:07:11.672413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.672448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.672606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.672641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.672824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.672862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.673035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.673073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.673221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.673259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.673431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.673470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.673608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.673644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.673835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.673874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.674075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.674134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.674278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.674327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 
00:37:22.098 [2024-10-13 20:07:11.674457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.674491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.674670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.674708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.674822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.674856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.674980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.675014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.675169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.675203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.675334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.675369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.675482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.675517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 [2024-10-13 20:07:11.675650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.675703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3160691 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:22.098 [2024-10-13 20:07:11.675858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.675892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 
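The xtrace fragments interleaved above record the restart itself: a new nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with -m 0xF0, a hexadecimal CPU core mask, and its pid 3160691 is kept in nvmfpid; the -i and -e values look like the usual SPDK shared-memory id and tracepoint mask, though that reading is an assumption here. The tiny helper below only decodes such a core mask, for illustration.

def cores_from_mask(mask):
    # Expand a hexadecimal core mask into the CPU ids it selects.
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xF0))  # [4, 5, 6, 7], i.e. what -m 0xF0 asks for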
00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3160691 00:37:22.098 [2024-10-13 20:07:11.676027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.676080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3160691 ']' 00:37:22.098 [2024-10-13 20:07:11.676235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.098 [2024-10-13 20:07:11.676270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.098 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.098 qpair failed and we were unable to recover it. 00:37:22.099 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.099 [2024-10-13 20:07:11.676477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.676513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.099 [2024-10-13 20:07:11.676640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.099 [2024-10-13 20:07:11.676700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 20:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:22.099 [2024-10-13 20:07:11.676847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.676886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.677424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.677479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 
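waitforlisten 3160691 then blocks until the new target's RPC socket at /var/tmp/spdk.sock accepts connections, which is what the "Waiting for process to start up and listen on UNIX domain socket" message announces. The loop below is a stand-in for that wait, not the test suite's actual helper; only the socket path comes from the log, while the timeout and poll interval are made up.

import socket
import time

def wait_for_rpc_socket(path="/var/tmp/spdk.sock", timeout=30.0, interval=0.2):
    # Poll the UNIX-domain RPC socket until the freshly started target
    # accepts a connection, or give up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False

print(wait_for_rpc_socket())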
00:37:22.099 [2024-10-13 20:07:11.677617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.677653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.677791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.677829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.677999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.678038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.678180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.678217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.678356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.678401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.678545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.678594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.678776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.678825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.678952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.678993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.679209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.679250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.679408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.679463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 
00:37:22.099 [2024-10-13 20:07:11.679603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.679652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.679815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.679883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.680008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.680060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.680201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.680239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.680391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.680453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.680600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.680634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.680764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.680817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.680990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.681028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.681256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.681294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.681411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.681465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 
00:37:22.099 [2024-10-13 20:07:11.681596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.681631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.681790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.681825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.681955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.681988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.682126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.682165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.682320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.682355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.682489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.682524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.682683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.682720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.682876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.682910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.099 qpair failed and we were unable to recover it. 00:37:22.099 [2024-10-13 20:07:11.683048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.099 [2024-10-13 20:07:11.683101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.683224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.683264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 
00:37:22.100 [2024-10-13 20:07:11.683401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.683437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.683617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.683684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.683827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.683890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.684063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.684126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.684267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.684304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.684478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.684512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.684666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.684720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.684829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.684865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.685049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.685102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.685252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.685304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 
00:37:22.100 [2024-10-13 20:07:11.685441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.685477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.685628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.685682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.685853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.685907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.686071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.686126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.686300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.686338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.686527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.686561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.686739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.686777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.686901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.686952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.687087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.687143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.687258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.687295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 
00:37:22.100 [2024-10-13 20:07:11.687486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.687536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.687647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.687683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.687820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.687856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.688024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.688078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.688212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.688257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.688390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.688430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.688566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.688601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.688731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.688766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.688879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.688912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.689048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.689082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 
00:37:22.100 [2024-10-13 20:07:11.689207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.689241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.689369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.689412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.689521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.689556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.689669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.689718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.689863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.689899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.690027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.690082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.690276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.690311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.690416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.690451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.100 [2024-10-13 20:07:11.690585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.100 [2024-10-13 20:07:11.690618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.100 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.690760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.690795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 
00:37:22.101 [2024-10-13 20:07:11.690902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.690937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.691068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.691102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.691240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.691273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.691435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.691485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.691628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.691666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.691825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.691884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.692108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.692169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.692298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.692335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.692494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.692527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.692657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.692690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 
00:37:22.101 [2024-10-13 20:07:11.692793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.692827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.692931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.692964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.693116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.693154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.693344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.693408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.693586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.693634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.693775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.693814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.694006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.694045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.694192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.694231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.694371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.694416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.694546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.694580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 
00:37:22.101 [2024-10-13 20:07:11.694768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.694821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.695016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.695075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.695237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.695305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.695447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.695483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.695619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.695653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.695790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.695824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.695983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.696016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.696164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.696197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.696319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.696367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.696533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.696582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 
00:37:22.101 [2024-10-13 20:07:11.696750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.696798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.696967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.697001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.697159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.697194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.697294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.697334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.697477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.697512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.697653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.697697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.697853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.697888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.697988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.698025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.101 [2024-10-13 20:07:11.698185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.101 [2024-10-13 20:07:11.698219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.101 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.698399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.698449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 
00:37:22.102 [2024-10-13 20:07:11.698592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.698626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.698754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.698788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.698895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.698929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.699058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.699092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.699229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.699263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.699402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.699438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.699598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.699632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.699739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.699773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.699920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.699955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.700075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.700109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 
00:37:22.102 [2024-10-13 20:07:11.700269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.700302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.700435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.700474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.700578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.700623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.700729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.700763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.700868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.700902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.701007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.701040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.701142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.701175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.701288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.701324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.701485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.701534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.701660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.701717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 
00:37:22.102 [2024-10-13 20:07:11.701823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.701858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.701967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.702115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.702259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.702409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.702547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.702717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.702908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.702942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.703073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.703107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.703246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.703280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 
00:37:22.102 [2024-10-13 20:07:11.703420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.703456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.703602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.703650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.703819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.703868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.704045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.704212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.704363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.704553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.704705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.102 [2024-10-13 20:07:11.704873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.102 qpair failed and we were unable to recover it. 00:37:22.102 [2024-10-13 20:07:11.704982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.705019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 
00:37:22.103 [2024-10-13 20:07:11.705183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.705217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.705348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.705390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.705500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.705534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.705643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.705677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.705795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.705829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.705967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.706003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.706103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.706136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.706263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.706312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.706492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.706540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.706703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.706738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 
00:37:22.103 [2024-10-13 20:07:11.706862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.706897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.707035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.707070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.707204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.707239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.707402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.707437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.707543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.707577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.707731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.707780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.707888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.707923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.708064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.708100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.708239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.708274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.708426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.708474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 
00:37:22.103 [2024-10-13 20:07:11.708604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.708638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.708786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.708820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.708978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.709011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.709171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.709205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.709310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.709346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.709465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.709500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.709609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.709645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.709809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.709844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.709974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.710139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 
00:37:22.103 [2024-10-13 20:07:11.710308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.710459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.710653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.710814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.103 [2024-10-13 20:07:11.710965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.103 [2024-10-13 20:07:11.710999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.103 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.711147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.711196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.711352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.711408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.711591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.711640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.711812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.711848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.711962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.711997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 
00:37:22.104 [2024-10-13 20:07:11.712161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.712196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.712356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.712390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.712540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.712578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.712736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.712786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.712926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.712962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.713124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.713158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.713263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.713297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.713459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.713508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.713650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.713685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.713832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.713868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 
00:37:22.104 [2024-10-13 20:07:11.714005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.714039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.714170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.714203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.714313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.714348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.714519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.714554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.714683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.714715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.714812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.714846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.714985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.715018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.715152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.715184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.715287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.715322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.715465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.715515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 
00:37:22.104 [2024-10-13 20:07:11.715646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.715683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.715821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.715857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.716016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.716050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.716155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.716189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.716325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.716360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.716496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.716531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.716683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.716731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.716867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.716904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.717042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.717077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.717208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.717243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 
00:37:22.104 [2024-10-13 20:07:11.717351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.717384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.717502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.717536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.717672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.717705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.717838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.717877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.104 [2024-10-13 20:07:11.717984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.104 [2024-10-13 20:07:11.718018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.104 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.718153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.718186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.718316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.718349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.718507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.718556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.718699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.718736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.718867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.718901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 
00:37:22.105 [2024-10-13 20:07:11.719076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.719112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.719239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.719304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.719413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.719449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.719575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.719609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.719745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.719778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.719916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.719950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.720058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.720094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.720237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.720270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.720406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.720440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.720549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.720589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 
00:37:22.105 [2024-10-13 20:07:11.720747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.720796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.720939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.720975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.721083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.721119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.721278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.721311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.721447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.721481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.721612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.721647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.721802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.721836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.721992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.722026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.722143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.722177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.722341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.722376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 
00:37:22.105 [2024-10-13 20:07:11.722515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.722564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.722727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.722777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.722915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.722951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.723111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.723144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.723253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.723287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.723422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.723457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.723588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.723622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.723727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.723761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.723891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.723925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.724099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.724147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 
00:37:22.105 [2024-10-13 20:07:11.724270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.724306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.724467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.724515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.724638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.724676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.724812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.724852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.724988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.105 [2024-10-13 20:07:11.725023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.105 qpair failed and we were unable to recover it. 00:37:22.105 [2024-10-13 20:07:11.725183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.725220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.725372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.725431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.725545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.725580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.725688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.725721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.725857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.725890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 
00:37:22.106 [2024-10-13 20:07:11.725979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.726012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.726148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.726183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.726331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.726367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.726501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.726550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.726690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.726726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.726830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.726864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.726997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.727144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.727289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.727450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 
00:37:22.106 [2024-10-13 20:07:11.727587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.727749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.727905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.727939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.728059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.728095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.728208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.728244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.728423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.728473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.728610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.728644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.728806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.728840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.728976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.729010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.729168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.729201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 
00:37:22.106 [2024-10-13 20:07:11.729342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.729377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.729542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.729590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.729739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.729787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.729935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.729973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.730134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.730170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.730293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.730328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.730493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.730542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.730698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.730747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.730886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.730921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.731035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.731070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 
00:37:22.106 [2024-10-13 20:07:11.731209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.731243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.731341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.731374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.731509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.731559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.731722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.731764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.731897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.731932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.106 qpair failed and we were unable to recover it. 00:37:22.106 [2024-10-13 20:07:11.732095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.106 [2024-10-13 20:07:11.732129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.732263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.732302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.732476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.732511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.732612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.732646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.732805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.732840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 
00:37:22.107 [2024-10-13 20:07:11.732948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.732983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.733106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.733154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.733305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.733354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.733519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.733567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.733734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.733769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.733879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.733914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.734047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.734081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.734205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.734240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.734402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.734453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.734609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.734659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 
00:37:22.107 [2024-10-13 20:07:11.734779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.734814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.734943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.734976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.735106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.735139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.735241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.735273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.735425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.735474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.735638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.735686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.735848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.735884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.736020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.736054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.736197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.736231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.736380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.736422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 
00:37:22.107 [2024-10-13 20:07:11.736559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.736603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.736754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.736803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.736921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.736957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.737112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.737149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.737313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.737347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.737520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.737554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.737688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.737721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.737857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.737890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.738014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.738047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.738203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.738235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 
00:37:22.107 [2024-10-13 20:07:11.738370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.107 [2024-10-13 20:07:11.738412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.107 qpair failed and we were unable to recover it. 00:37:22.107 [2024-10-13 20:07:11.738549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.738583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.738745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.738779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.738876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.738915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.739027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.739060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.739219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.739253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.739385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.739425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.739576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.739609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.739756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.739805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.739921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.739957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 
00:37:22.108 [2024-10-13 20:07:11.740091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.740125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.740281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.740316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.740453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.740487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.740610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.740643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.740753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.740787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.740895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.740929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.741086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.741120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.741226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.741261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.741405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.741453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.741601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.741638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 
00:37:22.108 [2024-10-13 20:07:11.741774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.741807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.741913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.741946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.742102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.742135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.742268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.742300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.742418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.742468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.742597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.742646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.742794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.742831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.742986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.743020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.743153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.743187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.743349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.743383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 
00:37:22.108 [2024-10-13 20:07:11.743515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.743550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.743688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.743736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.743881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.743917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.744017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.744051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.744189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.744223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.744381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.744424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.744536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.744571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.744747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.744783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.744889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.744922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.745045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.745078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 
00:37:22.108 [2024-10-13 20:07:11.745178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.108 [2024-10-13 20:07:11.745211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.108 qpair failed and we were unable to recover it. 00:37:22.108 [2024-10-13 20:07:11.745368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.745415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.745552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.745601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.745720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.745761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.745896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.745930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.746034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.746067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.746226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.746259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.746374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.746417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.746558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.746593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.746735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.746769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 
00:37:22.109 [2024-10-13 20:07:11.746902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.746934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.747047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.747080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.747211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.747243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.747409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.747459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.747606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.747653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.747817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.747852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.747985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.748157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.748328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.748500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 
00:37:22.109 [2024-10-13 20:07:11.748640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.748808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.748964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.748997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.749108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.749144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.749281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.749315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.749426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.749462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.749572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.749606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.749730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.749779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.749896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.749932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.750059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.750094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 
00:37:22.109 [2024-10-13 20:07:11.750228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.750263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.750406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.750440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.750546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.750580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.750704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.750737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.750884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.750917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.751049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.751083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.751188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.751221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.751405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.751455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.751582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.751632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.751771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.751805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 
00:37:22.109 [2024-10-13 20:07:11.751931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.751965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.109 qpair failed and we were unable to recover it. 00:37:22.109 [2024-10-13 20:07:11.752074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.109 [2024-10-13 20:07:11.752108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.752206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.752239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.752344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.752383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.752521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.752555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.752705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.752743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.752920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.752969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.753147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.753182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.753321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.753355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.753501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.753535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 
00:37:22.110 [2024-10-13 20:07:11.753688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.753737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.753874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.753908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.754953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.754987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.755112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.755145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 
00:37:22.110 [2024-10-13 20:07:11.755280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.755319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.755494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.755544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.755660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.755697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.755834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.755867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.755997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.756030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.756132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.756166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.756328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.756363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.756521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.756570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.756704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.756752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.756895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.756932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 
00:37:22.110 [2024-10-13 20:07:11.757047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.757082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.757220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.757254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.757381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.757427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.757558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.757606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.757736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.757785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.757932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.757968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.758075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.758110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.758270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.758305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.758438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.758486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.758611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.758647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 
00:37:22.110 [2024-10-13 20:07:11.758785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.758820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.110 [2024-10-13 20:07:11.758956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.110 [2024-10-13 20:07:11.758990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.110 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.759123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.759155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.759279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.759333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.759472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.759521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.759632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.759668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.759767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.759801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.759896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.759929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.760081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.760115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.760207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.760241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 
00:37:22.111 [2024-10-13 20:07:11.760363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.760408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.760543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.760592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.760730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.760766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.760897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.760929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.761033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.761066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.761202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.761235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.761344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.761377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.761525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.761558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.761690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.761723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.761857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.761890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 
00:37:22.111 [2024-10-13 20:07:11.762026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.762059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.762192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.762225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.762323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.762355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.762522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.762571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.762691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.762727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.762754] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:37:22.111 [2024-10-13 20:07:11.762837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.762870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.762881] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:22.111 [2024-10-13 20:07:11.763008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.763041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.763172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.763204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.763358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.763392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it.
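(Editor's note on the repeated failure above: on Linux, errno = 111 is ECONNREFUSED, i.e. the host's connect() to the NVMe/TCP listener at 10.0.0.2:4420 is being actively refused, which is consistent with the interleaved "Starting SPDK ... DPDK ... initialization" line showing the nvmf target still coming up. The following standalone sketch, not part of the test itself, reproduces the same errno from a plain connect() when no listener is accepting on that address/port; the address and port are copied from the log purely for illustration.)

```c
/*
 * Minimal sketch (assumption: run on a Linux host with no listener on the
 * target port). Mirrors what posix_sock_create's connect() reports when the
 * NVMe/TCP listener at 10.0.0.2:4420 is not yet accepting connections.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the port closed on a reachable host, this prints errno = 111 (ECONNREFUSED);
         * an unreachable host would instead time out or report a different errno. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```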
00:37:22.111 [2024-10-13 20:07:11.763517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.763552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.763688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.763721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.763851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.763884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.764929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.764963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 
00:37:22.111 [2024-10-13 20:07:11.765101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.765136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.765265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.111 [2024-10-13 20:07:11.765298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.111 qpair failed and we were unable to recover it. 00:37:22.111 [2024-10-13 20:07:11.765431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.765466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.765586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.765641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.765783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.765819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.765956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.765991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.766151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.766186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.766342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.766377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.766502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.766537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.766683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.766719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 
00:37:22.112 [2024-10-13 20:07:11.766832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.766866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.767002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.767037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.767188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.767224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.767360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.767402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.767503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.767538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.767710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.767746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.767881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.767916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.768076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.768112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.768243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.768287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.768436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.768486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 
00:37:22.112 [2024-10-13 20:07:11.768629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.768665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.768779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.768813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.768976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.769010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.769170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.769205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.769355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.769412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.769524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.769560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.769719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.769757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.769868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.769905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.770038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.770073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.770206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.770241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 
00:37:22.112 [2024-10-13 20:07:11.770423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.770473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.770584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.770623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.770765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.770801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.770918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.770952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.771061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.771095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.771224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.771272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.771423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.771460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.112 qpair failed and we were unable to recover it. 00:37:22.112 [2024-10-13 20:07:11.771615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-10-13 20:07:11.771665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.771829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.771867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.771974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.772021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 
00:37:22.113 [2024-10-13 20:07:11.772125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.772160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.772334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.772382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.772576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.772627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.772741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.772784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.772919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.772954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.773067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.773101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.773246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.773295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.773452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.773503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.773624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.773661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.773760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.773794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 
00:37:22.113 [2024-10-13 20:07:11.773900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.773935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.774036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.774069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.774206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.774239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.774383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.774428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.774547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.774581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.774734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.774783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.774932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.774969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.775090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.775125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.775282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.775316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.775497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.775546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 
00:37:22.113 [2024-10-13 20:07:11.775706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.775755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.775920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.775957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.776089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.776124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.776247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.776281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.776445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.776481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.776640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.776675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.776773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.776807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.776912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.776946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.777139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.777189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.777302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.777338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 
00:37:22.113 [2024-10-13 20:07:11.777494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.777529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.777668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.777703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.777831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.777864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.777991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.778025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.778132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.778167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.778278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.778311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.778423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-10-13 20:07:11.778456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.113 qpair failed and we were unable to recover it. 00:37:22.113 [2024-10-13 20:07:11.778593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.778626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.778765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.778798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.778933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.778967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 
00:37:22.114 [2024-10-13 20:07:11.779097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.779130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.779281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.779329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.779480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.779517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.779633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.779673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.779815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.779849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.780006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.780039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.780174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.780210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.780343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.780378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.780545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.780594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.780712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.780746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 
00:37:22.114 [2024-10-13 20:07:11.780874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.780909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.781001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.781035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.781165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.781199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.781354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.781411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.781529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.781564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.781713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.781748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.781884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.781919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.782033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.782068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.782229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.782261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.782390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.782455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 
00:37:22.114 [2024-10-13 20:07:11.782565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.782601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.782737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.782771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.782901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.782934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.783040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.783074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.783210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.783244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.783379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.783422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.783550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.783584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.783719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.783759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.783897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.783933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.784059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.784092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 
00:37:22.114 [2024-10-13 20:07:11.784236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.784269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.784382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.784423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.784530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.784563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.784682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.784715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.784849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.784882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.785018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.785050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.785209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.785244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.785427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.785478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.785614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.785663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.785830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.785864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 
00:37:22.114 [2024-10-13 20:07:11.786009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-10-13 20:07:11.786042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.114 qpair failed and we were unable to recover it. 00:37:22.114 [2024-10-13 20:07:11.786174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.786206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.786343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.786377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.786513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.786566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.786688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.786724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.786858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.786893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.787018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.787051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.787163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.787197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.787348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.787382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.787517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.787550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 
00:37:22.115 [2024-10-13 20:07:11.787699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.787749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.787858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.787895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.788035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.788070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.788188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.788222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.788353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.788386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.788503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.788537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.788696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.788729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.788899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.788933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.789067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.789101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.789201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.789235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 
00:37:22.115 [2024-10-13 20:07:11.789363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.789419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.789543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.789579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.789718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.789755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.789889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.789923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.790087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.790122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.790228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.790261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.790388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.790434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.790566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.790600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.790704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.790737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.790837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.790871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 
00:37:22.115 [2024-10-13 20:07:11.790981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.791128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.791304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.791455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.791622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.791792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.791929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.791962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.792121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.792153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.792298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.792343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.792475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.792510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 
00:37:22.115 [2024-10-13 20:07:11.792646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.792683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.792817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.792851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.792986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.793019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.793153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.115 [2024-10-13 20:07:11.793193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.115 qpair failed and we were unable to recover it. 00:37:22.115 [2024-10-13 20:07:11.793294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.793327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.793497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.793546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.793702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.793758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.793912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.793948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.794080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.794113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.794214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.794248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 
00:37:22.116 [2024-10-13 20:07:11.794389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.794429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.794530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.794564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.794678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.794712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.794854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.794890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.795038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.795073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.795217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.795256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.795419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.795455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.795579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.795627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.795745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.795779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.795917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.795951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 
00:37:22.116 [2024-10-13 20:07:11.796108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.796142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.796256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.796304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.796431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.796468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.796605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.796639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.796778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.796811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.796966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.796999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.797140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.797175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.797285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.797319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.797441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.797474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.797582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.797614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 
00:37:22.116 [2024-10-13 20:07:11.797785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.797830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.797963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.797997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.798102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.798136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.798294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.798328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.798447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.798486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.798610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.798648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.798794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.798829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.799005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.799040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.799140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.799173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.799310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.799343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 
00:37:22.116 [2024-10-13 20:07:11.799468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.799502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.799635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.799667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.799809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.799845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.799968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.800011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.116 qpair failed and we were unable to recover it. 00:37:22.116 [2024-10-13 20:07:11.800170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.116 [2024-10-13 20:07:11.800204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.800330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.800363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.800490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.800525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.800688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.800736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.800883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.800918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.801062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.801099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 
00:37:22.117 [2024-10-13 20:07:11.801262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.801295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.801429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.801463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.801594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.801628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.801745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.801779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.801913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.801947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.802063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.802098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.802248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.802284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.802392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.802435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.802553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.802586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.802703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.802737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 
00:37:22.117 [2024-10-13 20:07:11.802898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.802931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.803094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.803130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.803265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.803298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.803414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.803448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.803554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.803591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.803754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.803787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.803918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.803951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.804084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.804118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.804234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.804271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.804386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.804441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 
00:37:22.117 [2024-10-13 20:07:11.804566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.804602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.804723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.804758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.804918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.804952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.805060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.805092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.805252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.805285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.805404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.805438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.805541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.805574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.805684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.805718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.805872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.805906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.806034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.806067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 
00:37:22.117 [2024-10-13 20:07:11.806212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.806248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.806387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.806432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.806554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.806587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.806717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.806755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.806918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.806952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.807109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.807142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.807241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.807276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.807422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.807456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.807558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.807591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.807719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.807752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 
00:37:22.117 [2024-10-13 20:07:11.807872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.807905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.117 qpair failed and we were unable to recover it. 00:37:22.117 [2024-10-13 20:07:11.808007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.117 [2024-10-13 20:07:11.808040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.808168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.808201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.808299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.808332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.808457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.808492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.808617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.808650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.808806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.808844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.808991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.809138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.809316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 
00:37:22.118 [2024-10-13 20:07:11.809471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.809610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.809766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.809893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.809926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.810088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.810121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.810231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.810267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.810434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.810493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.810651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.810699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.810827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.810861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.810973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.811006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 
00:37:22.118 [2024-10-13 20:07:11.811136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.811175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.811308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.811353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.811511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.811547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.811652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.811689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.811828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.811862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.811993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.812025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.812187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.812222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.812340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.812390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.812559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.812594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.812707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.812743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 
00:37:22.118 [2024-10-13 20:07:11.812904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.812938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.813073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.813107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.813206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.813240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.813410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.813463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.813586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.813624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.813767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.813802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.813927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.813961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.814062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.814096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.814234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.814269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.814419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.814468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 
00:37:22.118 [2024-10-13 20:07:11.814588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.814623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.814753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.814790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.814954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.814987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.815120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.815155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.815266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.815300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.815440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.815478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.815613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.815661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.815846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.815881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.118 qpair failed and we were unable to recover it. 00:37:22.118 [2024-10-13 20:07:11.815990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.118 [2024-10-13 20:07:11.816023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.816135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.816168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 
00:37:22.119 [2024-10-13 20:07:11.816299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.816332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.816481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.816515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.816617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.816650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.816755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.816788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.816915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.816948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.817101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.817135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.817244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.817278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.817452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.817486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.817591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.817624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.817740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.817775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 
00:37:22.119 [2024-10-13 20:07:11.817892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.817929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.818066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.818099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.818234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.818267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.818408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.818443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.818553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.818586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.818691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.818733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.818893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.818927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.819063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.819096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.819231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.819265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.819410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.819443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 
00:37:22.119 [2024-10-13 20:07:11.819574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.819608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.819748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.819796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.819956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.819992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.820127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.820167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.820312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.820347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.820464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.820498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.820635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.820668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.820801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.820835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.820972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.821006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.821155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.821203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 
00:37:22.119 [2024-10-13 20:07:11.821320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.821356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.821517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.821565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.821688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.821724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.821858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.821891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.821999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.822032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.822164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.822197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.822317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.822354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.822489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.822525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.822661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.822695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.822831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.822867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 
00:37:22.119 [2024-10-13 20:07:11.822997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.823030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.823187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.823221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.823351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.823388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.823513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.823562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.823736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.823784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.823962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.823998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.824159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.119 [2024-10-13 20:07:11.824192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.119 qpair failed and we were unable to recover it. 00:37:22.119 [2024-10-13 20:07:11.824303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.824336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.824460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.824495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.824625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.824658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 
00:37:22.120 [2024-10-13 20:07:11.824820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.824854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.824988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.825025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.825140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.825177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.825333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.825367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.825503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.825541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.825693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.825729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.825867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.825901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.826051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.826085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.826216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.826261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.826392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.826437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 
00:37:22.120 [2024-10-13 20:07:11.826567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.826600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.826737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.826771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.826909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.826943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.827083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.827124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.827287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.827321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.827485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.827533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.827707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.827744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.827856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.827891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.827996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.828030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.828189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.828223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 
00:37:22.120 [2024-10-13 20:07:11.828384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.828440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.828587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.828623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.828776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.828813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.828988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.829022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.829180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.829213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.829339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.829373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.829535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.829569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.829712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.829745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.829886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.829919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.830032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.830066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 
00:37:22.120 [2024-10-13 20:07:11.830169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.830203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.830362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.830411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.830568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.830601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.830732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.830764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.830873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.830907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.831067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.831101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.831203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.831235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.831332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.831365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.831490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.831524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.831651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.831699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 
00:37:22.120 [2024-10-13 20:07:11.831820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.831855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.831998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.832034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.832176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.120 [2024-10-13 20:07:11.832209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.120 qpair failed and we were unable to recover it. 00:37:22.120 [2024-10-13 20:07:11.832318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.832351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.832482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.832515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.832625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.832658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.832760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.832793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.832902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.832937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.833100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.833135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.833316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.833365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 
00:37:22.121 [2024-10-13 20:07:11.833548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.833597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.833729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.833764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.833894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.833928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.834028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.834067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.834168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.834201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.834362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.834410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.834540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.834573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.834687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.834720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.834888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.834921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.835061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.835093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 
00:37:22.121 [2024-10-13 20:07:11.835209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.835245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.835354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.835391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.835501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.835535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.835667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.835701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.835841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.835874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.836040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.836073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.836215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.836248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.836391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.836433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.836577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.836611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 00:37:22.121 [2024-10-13 20:07:11.836734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.121 [2024-10-13 20:07:11.836767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.121 qpair failed and we were unable to recover it. 
00:37:22.121 [2024-10-13 20:07:11.839523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.121 [2024-10-13 20:07:11.839564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.121 qpair failed and we were unable to recover it.
00:37:22.122 [2024-10-13 20:07:11.841737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.122 [2024-10-13 20:07:11.841785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:22.122 qpair failed and we were unable to recover it.
00:37:22.126 [2024-10-13 20:07:11.870615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.126 [2024-10-13 20:07:11.870648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.126 qpair failed and we were unable to recover it. 00:37:22.126 [2024-10-13 20:07:11.870769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.126 [2024-10-13 20:07:11.870806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.126 qpair failed and we were unable to recover it. 00:37:22.126 [2024-10-13 20:07:11.870951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.126 [2024-10-13 20:07:11.870986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.126 qpair failed and we were unable to recover it. 00:37:22.126 [2024-10-13 20:07:11.871148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.126 [2024-10-13 20:07:11.871181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.126 qpair failed and we were unable to recover it. 00:37:22.126 [2024-10-13 20:07:11.871281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.126 [2024-10-13 20:07:11.871326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.126 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.871478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.871515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.871622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.871658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.871764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.871798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.871910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.871954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.872097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.872131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 
00:37:22.402 [2024-10-13 20:07:11.872293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.872332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.872488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.872523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.872653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.872687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.872841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.872875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.873014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.873060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.873199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.873233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.873363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.873414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.873545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.873577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.873714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.873755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.873858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.873892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 
00:37:22.402 [2024-10-13 20:07:11.874023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.874056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.874190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.874225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.874366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.874416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.874526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.874560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.874680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.874720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.874819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.874853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.874989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.875139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.875304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.875445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 
00:37:22.402 [2024-10-13 20:07:11.875594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.875742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.875881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.875923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.876023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.876058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.876155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.876188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.402 qpair failed and we were unable to recover it. 00:37:22.402 [2024-10-13 20:07:11.876293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.402 [2024-10-13 20:07:11.876327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.876440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.876474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.876601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.876649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.876785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.876830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.876998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.877033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 
00:37:22.403 [2024-10-13 20:07:11.877190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.877224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.877333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.877366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.877534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.877575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.877693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.877727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.877856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.877890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.878108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.878142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.878277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.878310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.878455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.878489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.878596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.878629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.878762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.878796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 
00:37:22.403 [2024-10-13 20:07:11.878903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.878946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.879080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.879115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.879254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.879288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.879444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.879493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.879620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.879655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.879785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.879828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.879982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.880165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.880316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.880473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 
00:37:22.403 [2024-10-13 20:07:11.880631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.880793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.880930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.880964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.881064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.881097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.881240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.881273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.881371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.881412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.881556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.881591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.881708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.881754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.881887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.881923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.882081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.882115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 
00:37:22.403 [2024-10-13 20:07:11.882247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.882281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.882421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.882458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.882573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.403 [2024-10-13 20:07:11.882608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.403 qpair failed and we were unable to recover it. 00:37:22.403 [2024-10-13 20:07:11.882758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.882792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.882920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.882953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.883083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.883116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.883257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.883291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.883416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.883451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.883565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.883599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.883731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.883765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 
00:37:22.404 [2024-10-13 20:07:11.883882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.883916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.884964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.884998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.885102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.885136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.885271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.885304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 
00:37:22.404 [2024-10-13 20:07:11.885427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.885466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.885586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.885620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.885733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.885773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.885909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.885942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.886082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.886116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.886227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.886261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.886427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.886476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.886626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.886663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.886805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.886840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.886964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.886997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 
00:37:22.404 [2024-10-13 20:07:11.887130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.887163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.887296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.887331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.887476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.887510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.887644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.887683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.887796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.887829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.887961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.887995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.888135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.404 [2024-10-13 20:07:11.888169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.404 qpair failed and we were unable to recover it. 00:37:22.404 [2024-10-13 20:07:11.888302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.888336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.888488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.888521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.888637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.888681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 
00:37:22.405 [2024-10-13 20:07:11.888840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.888874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.889021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.889055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.889163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.889231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.889335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.889369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.889517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.889551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.889713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.889746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.889877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.889910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.890042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.890076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.890170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.890204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.890327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.890375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 
00:37:22.405 [2024-10-13 20:07:11.890692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.890728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.890866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.890900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.891006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.891040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.891174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.891207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.891343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.891387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.891556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.891589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.891725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.891758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.891874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.891908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.892010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.892044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.892201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.892234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 
00:37:22.405 [2024-10-13 20:07:11.892371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.892423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.892558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.892591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.892714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.892747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.892855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.892890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.893028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.893062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.893199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.893232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.893371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.893412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.893524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.893558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.893670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.893703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.893915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.893950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 
00:37:22.405 [2024-10-13 20:07:11.894080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.894113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.894248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.894283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.894383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.894425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.894530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.894563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.894704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.405 [2024-10-13 20:07:11.894738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.405 qpair failed and we were unable to recover it. 00:37:22.405 [2024-10-13 20:07:11.894881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.894914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.895045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.895078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.895201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.895235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.895369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.895410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.895527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.895559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 
00:37:22.406 [2024-10-13 20:07:11.895672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.895706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.895870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.895903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.896036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.896198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.896330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.896473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.896631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.896856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.896980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.897021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.897161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.897199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 
00:37:22.406 [2024-10-13 20:07:11.897350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.897389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.897513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.897549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.897679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.897726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.897862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.897898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.898081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.898128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.898251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.898287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.898427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.898462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.898599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.898631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.898728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.898762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.898899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.898933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 
00:37:22.406 [2024-10-13 20:07:11.899033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.899070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.899206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.899239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.899342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.899378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.899508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.899557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.899722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.899767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-10-13 20:07:11.899915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.406 [2024-10-13 20:07:11.899955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.900068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.900105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.900222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.900257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.900403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.900438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.900543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.900575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-10-13 20:07:11.900685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.900721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.900860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.900895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.901118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.901268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.901420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.901569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.901730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.901862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.901992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.902025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.902155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.902188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-10-13 20:07:11.902327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.902361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.902515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.902564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.902708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.902745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.902850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.902885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.903019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.903052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.903187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.903220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.903356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.903390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.903506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.903540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.903671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.903719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.903878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.903921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-10-13 20:07:11.905410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.905468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.905599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.905639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.905813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.905853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.905974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.906011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.906176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.906214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.906360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.906405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.906531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.906567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.906721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.906768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-10-13 20:07:11.906904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.407 [2024-10-13 20:07:11.906939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.907054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.907087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 
00:37:22.408 [2024-10-13 20:07:11.907223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.907262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.907365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.907415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.907575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.907608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.907727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.907760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.907901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.907935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.908075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.908240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.908380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.908552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.908694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 
00:37:22.408 [2024-10-13 20:07:11.908823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.908956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.908988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.909133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.909264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.909401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.909547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.909713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.909845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.909993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.910026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.910177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.910225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 
00:37:22.408 [2024-10-13 20:07:11.910356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.910423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.910586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.910628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.910811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.910852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.911001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.911041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.913411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.913455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.913607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.913647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.913806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.913846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.914039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.914086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.914197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.914232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.914371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.914411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 
00:37:22.408 [2024-10-13 20:07:11.914555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.914588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.914720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.914754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.914799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:22.408 [2024-10-13 20:07:11.914893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.914925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.915059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.915093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.915223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.408 [2024-10-13 20:07:11.915256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-10-13 20:07:11.915398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.915433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.915568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.915602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.915702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.915735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.915845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.915877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 
00:37:22.409 [2024-10-13 20:07:11.915978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.916011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.916124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.916171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.916326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.916386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.916555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.916598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.916747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.916787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.916919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.916958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.917064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.917100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.917213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.917249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.917380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.917421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.917527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.917562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 
00:37:22.409 [2024-10-13 20:07:11.917702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.917736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.917865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.917898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.918057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.918090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.918262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.918296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.918430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.918465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.918608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.918656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.918815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.918858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.918978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.919017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.919201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.919239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.919383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.919427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 
00:37:22.409 [2024-10-13 20:07:11.919547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.919595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.919733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.919767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.919895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.919928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.920037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.920072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.920207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.920240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.920379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.920421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.920552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.920585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.920696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.920729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.920857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.920890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.921023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.921057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 
00:37:22.409 [2024-10-13 20:07:11.921157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.921190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.921316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.921348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.921489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.921523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.921687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.409 [2024-10-13 20:07:11.921721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.409 qpair failed and we were unable to recover it. 00:37:22.409 [2024-10-13 20:07:11.921825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.921858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.922020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.922196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.922331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.922475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.922612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 
00:37:22.410 [2024-10-13 20:07:11.922805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.922945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.922992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.923110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.923143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.923278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.923311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.923523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.923556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.923683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.923715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.923829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.923862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.923994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.924026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.924158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.924191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.924316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.924349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 
00:37:22.410 [2024-10-13 20:07:11.924481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.924514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.924621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.924655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.924790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.924823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.924990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.925150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.925295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.925471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.925604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.925731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.925921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.925955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 
00:37:22.410 [2024-10-13 20:07:11.926118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.926162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.926275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.926309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.926450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.926483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.926584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.926617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.926731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.926763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.926924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.926957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.927062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.927095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.927198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.927231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.927363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.927409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.927618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.927651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 
00:37:22.410 [2024-10-13 20:07:11.927785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.927817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.928026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.928059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.410 [2024-10-13 20:07:11.928192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.410 [2024-10-13 20:07:11.928225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.410 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.928348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.928387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.928510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.928543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.928658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.928700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.928811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.928843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.929006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.929039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.929170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.929203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.929410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.929443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 
00:37:22.411 [2024-10-13 20:07:11.929601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.929634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.929800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.929838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.929965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.930004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.930121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.930154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.930259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.930292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.930466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.930517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.930681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.930726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.930859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.930907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.931025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.931061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.931159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.931193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 
00:37:22.411 [2024-10-13 20:07:11.931331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.931365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.931497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.931532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.931664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.931704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.931835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.931868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.932027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.932061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.932174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.932208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.932349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.932384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.932549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.932583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.932728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.932771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.932919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.932952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 
00:37:22.411 [2024-10-13 20:07:11.933090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.933123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.933232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.933264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.933425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.411 [2024-10-13 20:07:11.933459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.411 qpair failed and we were unable to recover it. 00:37:22.411 [2024-10-13 20:07:11.933597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.933631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.933755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.933789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.933942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.933975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.934080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.934113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.934252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.934286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.934407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.934442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.934554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.934589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 
00:37:22.412 [2024-10-13 20:07:11.934738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.934771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.934909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.934944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.935076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.935109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.935238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.935271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.935385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.935424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.935550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.935583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.935755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.935788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.935894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.935928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.936085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.936120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.936264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.936297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 
00:37:22.412 [2024-10-13 20:07:11.936428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.936462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.936593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.936633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.936750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.936784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.936925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.936959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.937090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.937123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.937234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.937269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.937385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.937425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.937538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.937572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.937710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.937743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.937906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.937939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 
00:37:22.412 [2024-10-13 20:07:11.938095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.938127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.938277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.938311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.938463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.938496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.938612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.938647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.938834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.938868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.938978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.939023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.939187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.939220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.939348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.412 [2024-10-13 20:07:11.939387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.412 qpair failed and we were unable to recover it. 00:37:22.412 [2024-10-13 20:07:11.939547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.939580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.939696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.939730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 
00:37:22.413 [2024-10-13 20:07:11.939894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.939927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.940057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.940089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.940196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.940229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.940334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.940366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.940530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.940586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.940721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.940755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.940859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.940892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.941008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.941185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.941315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 
00:37:22.413 [2024-10-13 20:07:11.941471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.941604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.941775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.941945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.941978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.942083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.942117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.942273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.942306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.942436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.942469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.942575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.942608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.942740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.942773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.942876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.942909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 
00:37:22.413 [2024-10-13 20:07:11.943018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.943061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.943194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.943247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.943390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.943440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.943596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.943636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.943786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.943825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.943985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.944045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.944193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.944231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.944346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.944384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.944499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.944536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.944701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.944739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 
00:37:22.413 [2024-10-13 20:07:11.944888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.944926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.945054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.945092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.945207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.945241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.945348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.945389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.945504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.413 [2024-10-13 20:07:11.945538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.413 qpair failed and we were unable to recover it. 00:37:22.413 [2024-10-13 20:07:11.945635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.945669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.945807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.945840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.945976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.946104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.946273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 
00:37:22.414 [2024-10-13 20:07:11.946454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.946597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.946763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.946927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.946960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.947059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.947091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.947247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.947280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.947440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.947474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.947603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.947636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.947794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.947827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.947923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.947957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 
00:37:22.414 [2024-10-13 20:07:11.948068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.948102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.948230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.948265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.948418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.948453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.948583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.948617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.948818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.948851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.948995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.949028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.949159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.949192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.949310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.949345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.949506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.949540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.949752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.949785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 
00:37:22.414 [2024-10-13 20:07:11.949889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.949922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.950056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.950094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.950204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.950237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.950360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.950399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.950559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.950593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.950733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.950767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.950971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.951003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.951163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.951196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.951328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.951362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.951497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.951545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 
00:37:22.414 [2024-10-13 20:07:11.951685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.951728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.951853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.951892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.952032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.414 [2024-10-13 20:07:11.952070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.414 qpair failed and we were unable to recover it. 00:37:22.414 [2024-10-13 20:07:11.952211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.952249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.952354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.952403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.952522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.952556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.952655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.952700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.952834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.952868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.953003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.953036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.953133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.953167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 
00:37:22.415 [2024-10-13 20:07:11.953300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.953334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.953473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.953507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.953638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.953671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.953805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.953838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.953996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.954029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.954136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.954170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.954269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.954301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.954437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.954471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.954653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.954708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.954817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.954852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 
00:37:22.415 [2024-10-13 20:07:11.954994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.955028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.955159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.955192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.955315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.955348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.955462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.955496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.955610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.955644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.955785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.955818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.955974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.956096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.956218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.956386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 
00:37:22.415 [2024-10-13 20:07:11.956533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.956702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.956919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.956962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.958410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.958468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.958643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.958684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.958830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.958867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.958976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.959011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.959154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.959186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.415 [2024-10-13 20:07:11.959301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.415 [2024-10-13 20:07:11.959335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.415 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.959478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.959512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 
00:37:22.416 [2024-10-13 20:07:11.959642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.959689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.959803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.959837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.959975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.960117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.960254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.960408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.960581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.960729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.960923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.960956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.961062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.961103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 
00:37:22.416 [2024-10-13 20:07:11.961208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.961241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.961391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.961430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.961585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.961618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.961742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.961776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.961910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.961943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.962104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.962140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.962249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.962283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.962421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.962455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.962589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.962623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.962755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.962788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 
00:37:22.416 [2024-10-13 20:07:11.962969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.963137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.963323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.963474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.963621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.963794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.963965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.963998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.964136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.964168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.964300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.416 [2024-10-13 20:07:11.964333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.416 qpair failed and we were unable to recover it. 00:37:22.416 [2024-10-13 20:07:11.964477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.964511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 
00:37:22.417 [2024-10-13 20:07:11.964668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.964701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.964799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.964836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.964940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.964973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.965104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.965139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.965297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.965330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.965465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.965498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.965595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.965628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.965754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.965787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.965897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.965930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.966067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.966100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 
00:37:22.417 [2024-10-13 20:07:11.966212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.966246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.966408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.966441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.966542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.966575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.966727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.966760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.966893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.966926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.967059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.967092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.967226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.967259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.967397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.967431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.967546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.967578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.967685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.967717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 
00:37:22.417 [2024-10-13 20:07:11.967877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.967909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.968083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.968246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.968387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.968527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.968686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.968837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.968998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.969194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.969323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 
00:37:22.417 [2024-10-13 20:07:11.969470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.969610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.969754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.969928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.969961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.970084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.970117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.970216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.970249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.970381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.417 [2024-10-13 20:07:11.970422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.417 qpair failed and we were unable to recover it. 00:37:22.417 [2024-10-13 20:07:11.970557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.970590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.970720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.970753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.970847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.970880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 
00:37:22.418 [2024-10-13 20:07:11.970975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.971175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.971314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.971463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.971600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.971799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.971931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.971965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.972076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.972109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.972221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.972255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.972356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.972389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 
00:37:22.418 [2024-10-13 20:07:11.972564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.972597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.972725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.972758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.972884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.972917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.973079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.973219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.973385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.973531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.973702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.973868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.973969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 
00:37:22.418 [2024-10-13 20:07:11.974099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.974237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.974424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.974561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.974745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.974932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.974976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.975123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.975157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.975288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.975321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.975460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.975493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.975605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.975638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 
00:37:22.418 [2024-10-13 20:07:11.975731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.975764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.975915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.975948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.976062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.976095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.418 [2024-10-13 20:07:11.976196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.418 [2024-10-13 20:07:11.976228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.418 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.976329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.976363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.976529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.976573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.976725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.976773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.976888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.976923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.977016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.977050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.977186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.977220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 
00:37:22.419 [2024-10-13 20:07:11.977353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.977386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.977506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.977549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.977703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.977745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.977887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.977924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.978027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.978061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.978170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.978203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.978330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.978363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.978510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.978544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.978653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.978687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.978831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.978865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 
00:37:22.419 [2024-10-13 20:07:11.978968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.979136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.979273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.979406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.979539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.979685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.979822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.979855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.980017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.980168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.980344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 
00:37:22.419 [2024-10-13 20:07:11.980521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.980662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.980820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.980945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.980978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.981085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.981118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.983407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.983455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.983644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.983695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.983836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.983885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.984089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.984136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.984281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.984317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 
00:37:22.419 [2024-10-13 20:07:11.984448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.984483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.419 qpair failed and we were unable to recover it. 00:37:22.419 [2024-10-13 20:07:11.984622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.419 [2024-10-13 20:07:11.984655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.984811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.984844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.984941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.984974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.985082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.985115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.985241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.985274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.985431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.985480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.985590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.985626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.985732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.985767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.985904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.985938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 
00:37:22.420 [2024-10-13 20:07:11.986069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.986103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.986251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.986303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.986416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.986451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.986584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.986617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.986751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.986784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.986946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.986980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.987085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.987118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.987257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.987290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.987445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.987479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.987609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.987642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 
00:37:22.420 [2024-10-13 20:07:11.987774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.987806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.987961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.987994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.988127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.988160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.988290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.988322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.988455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.988502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.988621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.988657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.988799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.988835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.988972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.989113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.989245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 
00:37:22.420 [2024-10-13 20:07:11.989383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.989558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.989718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.989848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.989881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.989977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.990012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.990143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.990175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.990301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.990334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.420 qpair failed and we were unable to recover it. 00:37:22.420 [2024-10-13 20:07:11.990426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.420 [2024-10-13 20:07:11.990459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.990602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.990651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.990768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.990804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 
00:37:22.421 [2024-10-13 20:07:11.990941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.990975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.991111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.991145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.991288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.991324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.991483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.991517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.991633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.991668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.991772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.991805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.991915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.991948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.992038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.992072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.992180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.992216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.992359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.992420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 
00:37:22.421 [2024-10-13 20:07:11.992561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.992596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.992812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.992853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.992967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.993000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.993128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.993161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.993291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.993325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.993452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.993490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.993612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.993645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.993772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.993805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.994024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.994057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.994189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.994233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 
00:37:22.421 [2024-10-13 20:07:11.994362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.994402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.994499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.994532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.994682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.994731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.994890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.994938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.995059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.421 [2024-10-13 20:07:11.995093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.421 qpair failed and we were unable to recover it. 00:37:22.421 [2024-10-13 20:07:11.995238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.995272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.995415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.995448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.995614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.995663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.995805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.995841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.995956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.995990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 
00:37:22.422 [2024-10-13 20:07:11.996095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.996129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.996280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.996328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.996478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.996514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.996654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.996689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.996821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.996854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.996988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.997115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.997266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.997445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.997620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 
00:37:22.422 [2024-10-13 20:07:11.997780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.997948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.997983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.998091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.998125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.998260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.998292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.998420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.998469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.998604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.998642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.998778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.998812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.998911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.998945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.999081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.999115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.999231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.999279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 
00:37:22.422 [2024-10-13 20:07:11.999400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.999435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.999544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.999583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.999683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.999716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.999817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:11.999850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:11.999987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:12.000134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:12.000311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:12.000493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:12.000658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:12.000794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 
00:37:22.422 [2024-10-13 20:07:12.000959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.422 [2024-10-13 20:07:12.000992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.422 qpair failed and we were unable to recover it. 00:37:22.422 [2024-10-13 20:07:12.001152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.001186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.001315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.001348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.001501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.001549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.001665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.001700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.001841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.001874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.001973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.002005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.002128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.002161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.002293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.002327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.002443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.002478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 
00:37:22.423 [2024-10-13 20:07:12.002622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.002661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.002797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.002832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.002988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.003147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.003282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.003460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.003601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.003767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.003915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.003950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.004084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.004117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 
00:37:22.423 [2024-10-13 20:07:12.004223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.004256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.004386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.004434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.004560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.004592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.004718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.004751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.004898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.004931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.005038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.005072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.005206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.005239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.005339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.005374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.005535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.005583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.005757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.005806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 
00:37:22.423 [2024-10-13 20:07:12.005972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.006116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.006265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.006462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.006627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.006796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.006963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.006996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.007166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.007214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.007333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.007380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 00:37:22.423 [2024-10-13 20:07:12.007530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.423 [2024-10-13 20:07:12.007565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.423 qpair failed and we were unable to recover it. 
00:37:22.423 [2024-10-13 20:07:12.007676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.007711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.007817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.007850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.008006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.008039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.008142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.008176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.008289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.008344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.008490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.008539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.008692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.008728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.008859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.008892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.009020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.009053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.009186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.009220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 
00:37:22.424 [2024-10-13 20:07:12.009354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.009388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.009559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.009596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.009745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.009779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.009903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.009936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.010065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.010098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.010230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.010264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.010390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.010444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.010586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.010621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.010729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.010770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.010909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.010942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 
00:37:22.424 [2024-10-13 20:07:12.011071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.011105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.011239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.011273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.011430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.011464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.011616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.011664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.011810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.011845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.012052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.012086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.012227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.012260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.012363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.012402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.012540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.012575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.012688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.012723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 
00:37:22.424 [2024-10-13 20:07:12.012856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.012890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.012996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.013029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.013161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.013194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.013323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.013356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.013494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.013529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.013744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.013777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.013905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.013938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.424 qpair failed and we were unable to recover it. 00:37:22.424 [2024-10-13 20:07:12.014041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.424 [2024-10-13 20:07:12.014074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.014205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.014240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.014368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.014407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 
00:37:22.425 [2024-10-13 20:07:12.014513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.014547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.014643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.014686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.014822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.014856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.014960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.014992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.015161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.015196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.015419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.015453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.015581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.015614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.015761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.015794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.015894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.015926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.016062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.016095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 
00:37:22.425 [2024-10-13 20:07:12.016197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.016230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.016327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.016360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.016495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.016529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.016655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.016688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.016788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.016821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.016977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.017109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.017242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.017386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.017538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 
00:37:22.425 [2024-10-13 20:07:12.017724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.017914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.017950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.018085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.018119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.018260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.018294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.018406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.018462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.018596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.018630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.018773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.018806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.018935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.018968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.019126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.019159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.425 qpair failed and we were unable to recover it. 00:37:22.425 [2024-10-13 20:07:12.019262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.425 [2024-10-13 20:07:12.019296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 
00:37:22.426 [2024-10-13 20:07:12.019458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.019491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.019619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.019652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.019787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.019820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.019960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.019993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.020105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.020138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.020236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.020271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.020404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.020438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.020539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.020572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.020733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.020766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.020893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.020926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 
00:37:22.426 [2024-10-13 20:07:12.021063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.021096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.021209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.021243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.021377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.021418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.021547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.021580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.021725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.021759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.021923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.021968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.022177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.022211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.022348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.022383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.022545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.022593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.022731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.022780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 
00:37:22.426 [2024-10-13 20:07:12.022926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.022960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.023076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.023112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.023217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.023250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.023408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.023441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.023544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.023576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.023709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.023742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.023867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.023900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.024025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.024058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.024159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.024203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.024435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.024483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 
00:37:22.426 [2024-10-13 20:07:12.024665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.024714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.024854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.024888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.024991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.025024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.025150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.025184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.426 [2024-10-13 20:07:12.025287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.426 [2024-10-13 20:07:12.025320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.426 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.025419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.025452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.025600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.025648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.025796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.025832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.025940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.025973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.026076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.026110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 
00:37:22.427 [2024-10-13 20:07:12.026242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.026275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.026374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.026415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.026537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.026572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.026700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.026732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.026857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.026890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.027006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.027039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.027135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.027167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.027311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.027359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.027501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.027535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.027643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.027676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 
00:37:22.427 [2024-10-13 20:07:12.027885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.027917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.028056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.028291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.028433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.028599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.028749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.028882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.028987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.029130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.029290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 
00:37:22.427 [2024-10-13 20:07:12.029451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.029637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.029793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.029932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.029966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.030067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.030100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.030235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.030269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.030406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.030439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.030573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.030607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.030715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.030754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.427 [2024-10-13 20:07:12.030885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.030919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 
00:37:22.427 [2024-10-13 20:07:12.031025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.427 [2024-10-13 20:07:12.031059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.427 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.031194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.031229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.031364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.031403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.031531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.031565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.031680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.031729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.031895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.031932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.032041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.032183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.032312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.032507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 
00:37:22.428 [2024-10-13 20:07:12.032642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.032775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.032923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.032956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.033091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.033125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.033233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.033266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.033374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.033415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.033547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.033580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.033711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.033745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.033851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.033885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.034040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.034072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 
00:37:22.428 [2024-10-13 20:07:12.034204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.034239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.034348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.034380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.034523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.034558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.034663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.034696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.034799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.034833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.034998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.035031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.035157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.035190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.035316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.035349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.035491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.035526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.035647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.035681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 
00:37:22.428 [2024-10-13 20:07:12.035790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.035823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.035980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.036013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.036142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.036175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.036311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.036343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.036500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.036536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.036649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.036682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.428 qpair failed and we were unable to recover it. 00:37:22.428 [2024-10-13 20:07:12.036775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.428 [2024-10-13 20:07:12.036808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.036907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.036940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.037097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.037135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.037233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.037266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 
00:37:22.429 [2024-10-13 20:07:12.037378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.037420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.037553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.037586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.037720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.037754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.037923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.037957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.038104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.038152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.038267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.038302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.038439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.038473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.038607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.038640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.038777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.038810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.038915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.038948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 
00:37:22.429 [2024-10-13 20:07:12.039060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.039095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.039227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.039259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.039381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.039425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.039577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.039612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.039747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.039793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.039896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.039930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.040089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.040123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.040221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.040255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.040391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.040434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.040538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.040571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 
00:37:22.429 [2024-10-13 20:07:12.040708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.040740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.040837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.040869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.041015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.041047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.041176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.041210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.041321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.041369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.041530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.041565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.041671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.041705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.041814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.041847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.042007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.042040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.042188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.042221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 
00:37:22.429 [2024-10-13 20:07:12.042349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.429 [2024-10-13 20:07:12.042382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.429 qpair failed and we were unable to recover it. 00:37:22.429 [2024-10-13 20:07:12.042500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.042538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.042641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.042675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.042782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.042815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.042956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.042990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.043097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.043129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.043264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.043300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.043435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.043470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.043597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.043636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.043738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.043771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 
00:37:22.430 [2024-10-13 20:07:12.043912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.043947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.044083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.044116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.044231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.044263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.044377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.044416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.044517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.044550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.044710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.044744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.044874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.044907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.045040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.045073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.045171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.045203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.045328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.045361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 
00:37:22.430 [2024-10-13 20:07:12.045522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.045555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.045669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.045717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.045872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.045907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.046011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.046044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.046175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.046209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.046344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.046377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.046480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.046513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.046640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.046674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.046825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.430 [2024-10-13 20:07:12.046877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.430 qpair failed and we were unable to recover it. 00:37:22.430 [2024-10-13 20:07:12.047005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.047041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 
00:37:22.431 [2024-10-13 20:07:12.047178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.047212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.047385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.047425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.047521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.047564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.047664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.047697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.047856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.047889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.047990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.048022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.048149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.048182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.048296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.048344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.048474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.048510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.048642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.048675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 
00:37:22.431 [2024-10-13 20:07:12.048809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.048842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.048978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.049149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.049296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.049465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.049631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.049773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
00:37:22.431 [2024-10-13 20:07:12.049769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:22.431 [2024-10-13 20:07:12.049816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:22.431 [2024-10-13 20:07:12.049839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:22.431 [2024-10-13 20:07:12.049865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:22.431 [2024-10-13 20:07:12.049883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:22.431 [2024-10-13 20:07:12.049921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.431 [2024-10-13 20:07:12.049954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:22.431 qpair failed and we were unable to recover it.
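Editor's note: the pattern above keeps repeating — the initiator's connect() toward the target at 10.0.0.2:4420 is answered with errno 111 (ECONNREFUSED on Linux), and the NVMe/TCP qpair is then reported as unrecoverable. As a point of reference only, the following minimal standalone C sketch (not SPDK's posix_sock_create) reproduces the same errno when nothing is listening on the address and port taken from this log; on a host where 10.0.0.2 is unreachable the call may instead fail with a different errno such as a timeout.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Target address and port copied from the log above. */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the address reachable but no listener bound to port 4420,
         * errno is 111 (ECONNREFUSED) -- the value in every error above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}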
00:37:22.431 [2024-10-13 20:07:12.050064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.050098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.050229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.050262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.050408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.050443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.050574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.050607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.050753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.050786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.050904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.050937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.431 [2024-10-13 20:07:12.051042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.431 [2024-10-13 20:07:12.051077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.431 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.051237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.051270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.051417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.051450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.051555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.051588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 
00:37:22.432 [2024-10-13 20:07:12.051737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.051785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.051952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.051988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.052108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.052142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.052272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.052305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.052439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.052474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.052473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:37:22.432 [2024-10-13 20:07:12.052511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:37:22.432 [2024-10-13 20:07:12.052558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:37:22.432 [2024-10-13 20:07:12.052563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:37:22.432 [2024-10-13 20:07:12.052587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.052622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.052721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.052754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.052866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.052900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
00:37:22.432 [2024-10-13 20:07:12.053001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.432 [2024-10-13 20:07:12.053036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:22.432 qpair failed and we were unable to recover it.
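Editor's note: the reactor.c notices show the nvmf target bringing its reactors up on cores 4-7 while the initiator side is still retrying connections; once the retry budget for a queue pair is exhausted before a listener appears, the log records "qpair failed and we were unable to recover it." As a rough illustration only — a generic bounded-retry loop, not SPDK's actual qpair recovery code, with an arbitrary attempt count chosen for the sketch — that shape of behaviour looks like this:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* One connection attempt toward ip:port; returns true on success. */
static bool try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return false;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    if (!ok) {
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
    }
    close(fd);
    return ok;
}

int main(void)
{
    const int max_attempts = 5;                /* arbitrary retry budget for this sketch */

    for (int i = 0; i < max_attempts; i++) {
        if (try_connect("10.0.0.2", 4420)) {   /* address and port taken from the log */
            printf("connected\n");
            return 0;
        }
        usleep(100 * 1000);                    /* short pause before the next attempt */
    }

    /* Retry budget exhausted before a listener appeared on the port. */
    fprintf(stderr, "giving up: connection could not be recovered\n");
    return 1;
}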
00:37:22.432 [2024-10-13 20:07:12.053150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.053184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.053317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.053350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.053494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.053528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.053663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.053696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.053804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.053837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.053951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.053984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.054089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.054123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.054286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.054320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.054475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.054523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.054641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.054676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 
00:37:22.432 [2024-10-13 20:07:12.054814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.054847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.054975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.055129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.055289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.055452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.055648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.055798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.055938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.055972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.056120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.056173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.056313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.056376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 
00:37:22.432 [2024-10-13 20:07:12.056513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.056547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.056651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.056684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.056814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.056848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.056948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.056981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.057085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.432 [2024-10-13 20:07:12.057120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.432 qpair failed and we were unable to recover it. 00:37:22.432 [2024-10-13 20:07:12.057251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.057300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.057432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.057472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.057616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.057650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.057777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.057810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.057913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.057946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 
00:37:22.433 [2024-10-13 20:07:12.058051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.058086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.058213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.058261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.058411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.058460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.058575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.058610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.058718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.058752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.058861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.058894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.058999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.059141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.059278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.059417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 
00:37:22.433 [2024-10-13 20:07:12.059555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.059694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.059861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.059894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.059996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.060132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.060283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.060444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.060623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.060789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.060962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.060997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 
00:37:22.433 [2024-10-13 20:07:12.061133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.061166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.061304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.061338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.061458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.061492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.061607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.061644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.061747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.061781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.061890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.061924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.062030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.062063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.062169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.062203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.062299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.062338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.062451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.062485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 
00:37:22.433 [2024-10-13 20:07:12.062586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.433 [2024-10-13 20:07:12.062620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.433 qpair failed and we were unable to recover it. 00:37:22.433 [2024-10-13 20:07:12.062743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.062777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.062914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.062947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.063048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.063081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.063181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.063214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.063332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.063381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.063531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.063581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.063742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.063778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.063886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.063919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 
00:37:22.434 [2024-10-13 20:07:12.064153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.064898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.064997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.065149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.065291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.065457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 
00:37:22.434 [2024-10-13 20:07:12.065621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.065755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.065888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.065921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.066028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.066063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.066171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.066207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.066359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.066416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.066538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.066574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.066680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.066713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.066833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.066868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.067000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 
00:37:22.434 [2024-10-13 20:07:12.067168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.067339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.067494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.067633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.067765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.067897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.067930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.068037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.068069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.068217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.068265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.068403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.068457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 00:37:22.434 [2024-10-13 20:07:12.068614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.068663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.434 qpair failed and we were unable to recover it. 
00:37:22.434 [2024-10-13 20:07:12.068785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.434 [2024-10-13 20:07:12.068821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.068952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.068986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.069095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.069129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.069263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.069297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.069411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.069450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.069588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.069636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.069779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.069814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.069927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.069961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.070064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.070098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.070210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.070246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 
00:37:22.435 [2024-10-13 20:07:12.070362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.070406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.070519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.070553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.070664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.070697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.070830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.070864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.070966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.071103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.071262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.071448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.071608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.071779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 
00:37:22.435 [2024-10-13 20:07:12.071953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.071987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.072095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.072130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.072255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.072303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.072418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.072456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.072605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.072641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.072754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.072820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.072927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.072961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.073084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.073117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.073222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.073255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.073378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.073437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 
00:37:22.435 [2024-10-13 20:07:12.073592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.073640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.073780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.073815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.073947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.073982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.435 [2024-10-13 20:07:12.074085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.435 [2024-10-13 20:07:12.074118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.435 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.074259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.074295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.074410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.074445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.074571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.074605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.074697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.074730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.074827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.074865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.074968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 
00:37:22.436 [2024-10-13 20:07:12.075096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.075267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.075428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 A controller has encountered a failure and is being reset. 00:37:22.436 [2024-10-13 20:07:12.075622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.075774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.075917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.075952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.076058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.076200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.076366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.076524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 
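A note on the repeated messages above: errno 111 is ECONNREFUSED, meaning the host keeps calling connect() toward 10.0.0.2:4420 while nothing is accepting on that port, presumably because the target side of this disconnect test is down at this point, so every qpair attempt fails and cannot be recovered. A quick manual check of the same condition, sketched with the address and port taken from this log:

  # Is anything listening on the NVMe/TCP port the host is dialing?
  ss -ltn | grep ':4420' || echo 'no listener on 4420'
  # One-shot probe of the target address; an immediate failure corresponds
  # to connect() returning ECONNREFUSED (errno 111) as logged above.
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    && echo 'listener reachable' || echo 'connection refused or timed out'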
00:37:22.436 [2024-10-13 20:07:12.076665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.076819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.076963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.076996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.077096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.077129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.077265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.077303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:22.436 qpair failed and we were unable to recover it. 00:37:22.436 [2024-10-13 20:07:12.077501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.436 [2024-10-13 20:07:12.077544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:22.436 [2024-10-13 20:07:12.077572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:22.436 [2024-10-13 20:07:12.077612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:22.436 [2024-10-13 20:07:12.077643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:22.436 [2024-10-13 20:07:12.077669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:22.436 [2024-10-13 20:07:12.077695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:22.436 Unable to reset the controller. 
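The final block above is where the failure stops being retried: one qpair returns Bad file descriptor when flushed, nvme_ctrlr_process_init finds the controller in error state, spdk_nvme_ctrlr_reconnect_poll_async reports that reinitialization failed, nvme_ctrlr_fail leaves it in failed state, and the tool prints Unable to reset the controller. Recovery is only possible once the target is configured and listening again, which is what the RPC calls further down do. A purely illustrative helper for waiting on that condition; the function name and retry policy are invented here and are not part of the test scripts:

  # Hypothetical helper: poll until the NVMe/TCP listener is reachable again.
  wait_for_listener() {
      local addr=$1 port=$2 retries=${3:-30}
      local i
      for ((i = 0; i < retries; i++)); do
          if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
              return 0    # listener is back, reconnect attempts can succeed
          fi
          sleep 1
      done
      return 1            # still refused after all attempts
  }
  wait_for_listener 10.0.0.2 4420 || echo 'target never came back on 4420'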
00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.004 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.262 Malloc0 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.262 [2024-10-13 20:07:12.886228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.262 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.263 [2024-10-13 20:07:12.916437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.263 20:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3160174 00:37:23.522 Controller properly reset. 00:37:28.841 Initializing NVMe Controllers 00:37:28.841 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:28.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:28.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:28.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:28.841 Initialization complete. Launching workers. 
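For reference, the rpc_cmd calls traced above are ordinary SPDK JSON-RPC methods, so the same target bring-up can be reproduced outside the autotest harness with scripts/rpc.py from an SPDK checkout. The sketch below only mirrors the arguments visible in this log; the rpc.py path and the 10.0.0.2:4420 listener are specific to this run:

  RPC=./scripts/rpc.py                        # assumed SPDK checkout location
  # 64 MiB malloc bdev with 512-byte blocks, named Malloc0
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # TCP transport, then a subsystem that allows any host (-a) with a fixed serial (-s)
  $RPC nvmf_create_transport -t tcp
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # expose the bdev as a namespace and add the data and discovery listeners
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is back, the host side's reconnect poll can complete, which matches the Controller properly reset and Attached to NVMe over Fabrics controller lines above.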
00:37:28.841 Starting thread on core 1 00:37:28.841 Starting thread on core 2 00:37:28.841 Starting thread on core 3 00:37:28.841 Starting thread on core 0 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:28.841 00:37:28.841 real 0m11.674s 00:37:28.841 user 0m37.027s 00:37:28.841 sys 0m7.481s 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:28.841 ************************************ 00:37:28.841 END TEST nvmf_target_disconnect_tc2 00:37:28.841 ************************************ 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:28.841 rmmod nvme_tcp 00:37:28.841 rmmod nvme_fabrics 00:37:28.841 rmmod nvme_keyring 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3160691 ']' 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3160691 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3160691 ']' 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3160691 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160691 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160691' 00:37:28.841 killing process with pid 3160691 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3160691 00:37:28.841 20:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3160691 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.776 20:07:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.683 20:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:31.683 00:37:31.683 real 0m17.669s 00:37:31.683 user 1m5.321s 00:37:31.683 sys 0m10.214s 00:37:31.683 20:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.683 20:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:31.683 ************************************ 00:37:31.683 END TEST nvmf_target_disconnect 00:37:31.683 ************************************ 00:37:31.683 20:07:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:31.683 00:37:31.683 real 7m39.998s 00:37:31.683 user 19m56.402s 00:37:31.683 sys 1m31.840s 00:37:31.683 20:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.683 20:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.683 ************************************ 00:37:31.683 END TEST nvmf_host 00:37:31.683 ************************************ 00:37:31.942 20:07:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:31.942 20:07:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:31.942 20:07:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:31.942 20:07:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:31.942 20:07:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.942 20:07:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.942 ************************************ 00:37:31.942 START TEST nvmf_target_core_interrupt_mode 00:37:31.942 ************************************ 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:31.942 * Looking for test storage... 00:37:31.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.942 --rc genhtml_branch_coverage=1 00:37:31.942 --rc genhtml_function_coverage=1 00:37:31.942 --rc genhtml_legend=1 00:37:31.942 --rc geninfo_all_blocks=1 00:37:31.942 --rc geninfo_unexecuted_blocks=1 00:37:31.942 00:37:31.942 ' 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.942 --rc genhtml_branch_coverage=1 00:37:31.942 --rc genhtml_function_coverage=1 00:37:31.942 --rc genhtml_legend=1 00:37:31.942 --rc geninfo_all_blocks=1 00:37:31.942 --rc geninfo_unexecuted_blocks=1 00:37:31.942 00:37:31.942 ' 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.942 --rc genhtml_branch_coverage=1 00:37:31.942 --rc genhtml_function_coverage=1 00:37:31.942 --rc genhtml_legend=1 00:37:31.942 --rc geninfo_all_blocks=1 00:37:31.942 --rc geninfo_unexecuted_blocks=1 00:37:31.942 00:37:31.942 ' 00:37:31.942 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.942 --rc genhtml_branch_coverage=1 00:37:31.942 --rc genhtml_function_coverage=1 00:37:31.942 --rc genhtml_legend=1 00:37:31.943 --rc geninfo_all_blocks=1 00:37:31.943 --rc geninfo_unexecuted_blocks=1 00:37:31.943 00:37:31.943 ' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:31.943 ************************************ 00:37:31.943 START TEST nvmf_abort 00:37:31.943 ************************************ 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:31.943 * Looking for test storage... 00:37:31.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:37:31.943 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:32.202 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:32.202 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:32.202 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.203 --rc genhtml_branch_coverage=1 00:37:32.203 --rc genhtml_function_coverage=1 00:37:32.203 --rc genhtml_legend=1 00:37:32.203 --rc geninfo_all_blocks=1 00:37:32.203 --rc geninfo_unexecuted_blocks=1 00:37:32.203 00:37:32.203 ' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.203 --rc genhtml_branch_coverage=1 00:37:32.203 --rc genhtml_function_coverage=1 00:37:32.203 --rc genhtml_legend=1 00:37:32.203 --rc geninfo_all_blocks=1 00:37:32.203 --rc geninfo_unexecuted_blocks=1 00:37:32.203 00:37:32.203 ' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.203 --rc genhtml_branch_coverage=1 00:37:32.203 --rc genhtml_function_coverage=1 00:37:32.203 --rc genhtml_legend=1 00:37:32.203 --rc geninfo_all_blocks=1 00:37:32.203 --rc geninfo_unexecuted_blocks=1 00:37:32.203 00:37:32.203 ' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.203 --rc genhtml_branch_coverage=1 00:37:32.203 --rc genhtml_function_coverage=1 00:37:32.203 --rc genhtml_legend=1 00:37:32.203 --rc geninfo_all_blocks=1 00:37:32.203 --rc geninfo_unexecuted_blocks=1 00:37:32.203 00:37:32.203 ' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:32.203 20:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:32.203 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:32.204 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:32.204 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:32.204 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:32.204 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:32.204 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:32.204 20:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.103 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:34.104 20:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:34.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:34.104 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:34.104 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:34.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:34.104 20:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:34.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:34.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:37:34.363 00:37:34.363 --- 10.0.0.2 ping statistics --- 00:37:34.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.363 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:34.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:34.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:37:34.363 00:37:34.363 --- 10.0.0.1 ping statistics --- 00:37:34.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.363 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=3163629 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3163629 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3163629 ']' 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:34.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:34.363 20:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.623 [2024-10-13 20:07:24.197966] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:34.623 [2024-10-13 20:07:24.200541] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:37:34.623 [2024-10-13 20:07:24.200644] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:34.623 [2024-10-13 20:07:24.334481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:34.883 [2024-10-13 20:07:24.473233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:34.883 [2024-10-13 20:07:24.473304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:34.883 [2024-10-13 20:07:24.473333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:34.883 [2024-10-13 20:07:24.473354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:34.883 [2024-10-13 20:07:24.473376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:34.883 [2024-10-13 20:07:24.476159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:34.883 [2024-10-13 20:07:24.476244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.883 [2024-10-13 20:07:24.476253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:35.143 [2024-10-13 20:07:24.852404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:35.143 [2024-10-13 20:07:24.853509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:35.143 [2024-10-13 20:07:24.854290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
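For reference, the nvmf_tgt launch traced just above reduces to roughly the following; a minimal sketch with values taken from this log, not the full nvmfappstart/waitforlisten helpers:

    # Target runs inside the cvl_0_0_ns_spdk namespace: shared-memory id 0,
    # tracepoint group mask 0xFFFF, interrupt mode enabled, reactors on cores 1-3 (-m 0xE).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!    # 3163629 in this run; the harness then waits for /var/tmp/spdk.sock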
00:37:35.143 [2024-10-13 20:07:24.854627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.403 [2024-10-13 20:07:25.189410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.403 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.664 Malloc0 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.664 Delay0 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
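The RPC-driven configuration traced around this point (the listeners follow just below) is equivalent to the following scripts/rpc.py calls; a sketch of what rpc_cmd issues against the target's /var/tmp/spdk.sock:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The delay bdev layered on Malloc0 keeps I/O pending long enough for the abort workload below to have requests to cancel.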
00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.664 [2024-10-13 20:07:25.317578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.664 20:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:35.664 [2024-10-13 20:07:25.428166] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:38.203 Initializing NVMe Controllers 00:37:38.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:38.203 controller IO queue size 128 less than required 00:37:38.203 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:38.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:38.203 Initialization complete. Launching workers. 
00:37:38.203 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 21071 00:37:38.203 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 21128, failed to submit 66 00:37:38.203 success 21071, unsuccessful 57, failed 0 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:38.203 rmmod nvme_tcp 00:37:38.203 rmmod nvme_fabrics 00:37:38.203 rmmod nvme_keyring 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3163629 ']' 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3163629 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3163629 ']' 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3163629 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3163629 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3163629' 00:37:38.203 killing process with pid 3163629 
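The cleanup being traced here is nvmftestfini; stripped to its essentials (and assuming the helpers do what their traces suggest), it is roughly:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # drop the test subsystem first
    modprobe -v -r nvme-tcp        # unloads nvme_tcp and, as logged, nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3163629 && wait 3163629   # stop the interrupt-mode target started above

The SPDK-tagged iptables rule and the cvl_0_0_ns_spdk namespace are removed in the lines that follow.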
00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3163629 00:37:38.203 20:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3163629 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.578 20:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:41.489 00:37:41.489 real 0m9.431s 00:37:41.489 user 0m11.790s 00:37:41.489 sys 0m3.189s 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:41.489 ************************************ 00:37:41.489 END TEST nvmf_abort 00:37:41.489 ************************************ 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:41.489 ************************************ 00:37:41.489 START TEST nvmf_ns_hotplug_stress 00:37:41.489 ************************************ 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:41.489 * Looking for test storage... 
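Before the next test begins, a recap of what the nvmf_abort phase above exercised: the SPDK abort example was pointed at the delay-backed namespace over TCP. A sketch of the invocation (flags read per the usual SPDK example-app conventions: -q queue depth, -t run time in seconds, -c core mask, -l log level) and of how this run's counters add up:

    build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
    # Counters from this run: 21194 I/Os issued (123 completed, 21071 failed because they
    # were aborted); 21128 aborts submitted, 66 failed to submit, 21071 succeeded, 57 did not.
    # The large Delay0 latencies keep nearly every request outstanding long enough to be
    # aborted, which is what the test is designed to exercise.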
00:37:41.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:37:41.489 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:41.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.748 --rc genhtml_branch_coverage=1 00:37:41.748 --rc genhtml_function_coverage=1 00:37:41.748 --rc genhtml_legend=1 00:37:41.748 --rc geninfo_all_blocks=1 00:37:41.748 --rc geninfo_unexecuted_blocks=1 00:37:41.748 00:37:41.748 ' 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:41.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.748 --rc genhtml_branch_coverage=1 00:37:41.748 --rc genhtml_function_coverage=1 00:37:41.748 --rc genhtml_legend=1 00:37:41.748 --rc geninfo_all_blocks=1 00:37:41.748 --rc geninfo_unexecuted_blocks=1 00:37:41.748 00:37:41.748 ' 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:41.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.748 --rc genhtml_branch_coverage=1 00:37:41.748 --rc genhtml_function_coverage=1 00:37:41.748 --rc genhtml_legend=1 00:37:41.748 --rc geninfo_all_blocks=1 00:37:41.748 --rc geninfo_unexecuted_blocks=1 00:37:41.748 00:37:41.748 ' 00:37:41.748 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:41.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.748 --rc genhtml_branch_coverage=1 00:37:41.748 --rc genhtml_function_coverage=1 
00:37:41.748 --rc genhtml_legend=1 00:37:41.748 --rc geninfo_all_blocks=1 00:37:41.748 --rc geninfo_unexecuted_blocks=1 00:37:41.748 00:37:41.748 ' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:41.749 20:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.656 20:07:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.656 20:07:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:43.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:43.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.656 
20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:43.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.656 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:43.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.657 20:07:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:37:43.657 00:37:43.657 --- 10.0.0.2 ping statistics --- 00:37:43.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.657 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:37:43.657 00:37:43.657 --- 10.0.0.1 ping statistics --- 00:37:43.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.657 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3166106 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3166106 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3166106 ']' 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
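The nvmf_tcp_init block above is the part worth replaying by hand when a run fails at the connectivity stage: the second E810 port (cvl_0_1) stays in the default namespace as the initiator NIC, while the first port (cvl_0_0) is moved into a private namespace where the target will listen on 10.0.0.2:4420. A minimal sketch of the equivalent manual setup, assuming the same interface names and that the 10.0.0.0/24 range is free on the rig:

  # Recreate the target/initiator split by hand (names and addresses copied from the log above).
  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port moves into that namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # default namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

The two pings mirror the checks at common.sh@290 and @291; if either of them fails here, nothing later in ns_hotplug_stress.sh can reach the target.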
00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:43.657 20:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:43.915 [2024-10-13 20:07:33.527964] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:43.915 [2024-10-13 20:07:33.530570] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:37:43.915 [2024-10-13 20:07:33.530692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.915 [2024-10-13 20:07:33.677290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:44.173 [2024-10-13 20:07:33.814278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:44.173 [2024-10-13 20:07:33.814351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.173 [2024-10-13 20:07:33.814390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.173 [2024-10-13 20:07:33.814439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.173 [2024-10-13 20:07:33.814465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:44.173 [2024-10-13 20:07:33.817102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:44.173 [2024-10-13 20:07:33.817201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.173 [2024-10-13 20:07:33.817209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:44.433 [2024-10-13 20:07:34.182481] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:44.433 [2024-10-13 20:07:34.183554] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:44.433 [2024-10-13 20:07:34.184360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:44.433 [2024-10-13 20:07:34.184702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
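The target itself is then started inside that namespace with --interrupt-mode, which is the point of this test variant (nvmf_target_core_interrupt_mode): the NOTICE lines above show the three reactors on cores 1, 2 and 3 and every nvmf_tgt poll group coming up in interrupt rather than poll mode. Outside the harness, the equivalent launch plus a readiness check could look roughly like the sketch below; the path placeholder and the use of rpc_get_methods with a 120 s timeout as a liveness probe are choices made for illustration, not something the log above prescribes.

  SPDK=/path/to/spdk            # placeholder for the SPDK checkout used by the job

  # Launch the NVMe-oF target in the target namespace, pinned to cores 1-3 (-m 0xE),
  # with interrupt mode and the tracepoint mask used by this run.
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  NVMF_PID=$!

  # Block until the app answers on /var/tmp/spdk.sock before sending real RPCs.
  "$SPDK/scripts/rpc.py" -t 120 rpc_get_methods > /dev/null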
00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:44.693 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:44.953 [2024-10-13 20:07:34.754309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.214 20:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:45.473 20:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.731 [2024-10-13 20:07:35.330834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.731 20:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:45.989 20:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:46.247 Malloc0 00:37:46.247 20:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:46.505 Delay0 00:37:46.505 20:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:46.764 20:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:47.022 NULL1 00:37:47.282 20:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
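With the target answering RPCs, ns_hotplug_stress.sh@27 through @36 above build the test topology: a TCP transport created with the options the harness passes (-t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a data listener and the discovery listener on 10.0.0.2:4420, and two backing bdevs, Delay0 (a delay bdev layered on the malloc bdev Malloc0) and NULL1 (a null bdev created as 1000 with 512-byte blocks), each attached as a namespace. Condensed into a standalone sequence, with the rpc.py path abbreviated and the arguments copied from the log:

  RPC="/path/to/spdk/scripts/rpc.py"   # stands in for the full rpc.py path in the log

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0          # backing store for the delay bdev
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The stress loop that follows is then simple: spdk_nvme_perf is started in the background against 10.0.0.2:4420, and for as long as kill -0 $PERF_PID succeeds the script keeps removing namespace 1, re-adding Delay0, and growing NULL1 one step at a time (null_size 1001, 1002, ...) with bdev_null_resize, so that namespace attach, detach and resize events race against live I/O.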
00:37:47.540 20:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3166536 00:37:47.540 20:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:47.540 20:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.540 20:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:48.914 Read completed with error (sct=0, sc=11) 00:37:48.914 20:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:48.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:48.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:48.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:48.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:48.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:48.914 20:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:48.914 20:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:49.172 true 00:37:49.172 20:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:49.172 20:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:50.107 20:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:50.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:50.365 20:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:50.365 20:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:50.623 true 00:37:50.623 20:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:50.623 20:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:50.881 20:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:51.139 20:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:51.139 20:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:51.396 true 00:37:51.397 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:51.397 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:51.655 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:51.913 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:51.913 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:52.171 true 00:37:52.171 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:52.171 20:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.105 20:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:53.105 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:53.364 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:53.364 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:53.622 true 00:37:53.622 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:53.622 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.880 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:54.138 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:54.138 20:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:54.397 true 00:37:54.397 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:54.397 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:54.656 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:54.914 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:54.914 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:55.172 true 00:37:55.172 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:55.172 20:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.110 20:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:56.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.626 20:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:56.626 20:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:56.884 true 00:37:56.884 20:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:56.884 20:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.143 20:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:57.401 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:57.401 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:57.659 true 00:37:57.659 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:57.659 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.918 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:58.176 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:58.176 20:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:58.433 true 00:37:58.433 20:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:58.433 20:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.367 20:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.625 20:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:59.625 20:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:59.884 true 00:37:59.884 20:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:37:59.884 20:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:00.142 20:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:00.400 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:00.400 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:00.658 true 00:38:00.658 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:00.658 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:00.941 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:01.224 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1013 00:38:01.224 20:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:01.482 true 00:38:01.482 20:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:01.482 20:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:02.422 20:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.681 20:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:02.681 20:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:02.939 true 00:38:02.939 20:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:02.939 20:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.198 20:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.456 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:03.456 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:03.715 true 00:38:03.715 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:03.715 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.974 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:04.231 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:04.231 20:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:04.489 true 00:38:04.489 20:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:04.489 20:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:05.425 20:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.683 20:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:05.683 20:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:05.941 true 00:38:05.941 20:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:05.941 20:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:06.509 20:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.509 20:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:06.509 20:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:06.767 true 00:38:06.767 20:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:06.767 20:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.025 20:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.283 20:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:07.283 20:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:07.542 true 00:38:07.802 20:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:07.802 20:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:08.745 20:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.003 20:07:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:09.003 20:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:09.261 true 00:38:09.261 20:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:09.261 20:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.519 20:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.778 20:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:09.778 20:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:10.035 true 00:38:10.035 20:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:10.035 20:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.293 20:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.551 20:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:10.551 20:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:10.809 true 00:38:10.809 20:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:10.809 20:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.746 20:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:12.005 20:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:12.005 20:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:12.263 true 00:38:12.263 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:12.263 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.829 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.829 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:12.829 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:13.087 true 00:38:13.087 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:13.087 20:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.345 20:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.604 20:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:13.604 20:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:13.862 true 00:38:14.120 20:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:14.120 20:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.054 20:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:15.312 20:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:15.312 20:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:15.570 true 00:38:15.570 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:15.570 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.828 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.087 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:16.087 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:16.345 true 00:38:16.345 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:16.345 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.603 20:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.861 20:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:16.861 20:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:17.120 true 00:38:17.120 20:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:17.120 20:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.058 Initializing NVMe Controllers 00:38:18.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:18.058 Controller IO queue size 128, less than required. 00:38:18.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:18.058 Controller IO queue size 128, less than required. 00:38:18.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:18.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:18.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:18.058 Initialization complete. Launching workers. 
00:38:18.058 ======================================================== 00:38:18.058 Latency(us) 00:38:18.058 Device Information : IOPS MiB/s Average min max 00:38:18.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 456.40 0.22 114600.20 4126.81 1016571.79 00:38:18.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6497.27 3.17 19699.70 1917.94 391259.21 00:38:18.058 ======================================================== 00:38:18.058 Total : 6953.67 3.40 25928.44 1917.94 1016571.79 00:38:18.058 00:38:18.058 20:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.315 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:18.315 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:18.573 true 00:38:18.573 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3166536 00:38:18.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3166536) - No such process 00:38:18.573 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3166536 00:38:18.573 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.831 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:19.400 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:19.400 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:19.400 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:19.400 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:19.400 20:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:19.400 null0 00:38:19.400 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:19.400 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:19.400 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:19.660 null1 00:38:19.660 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:19.660 
20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:19.660 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:19.920 null2 00:38:19.920 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:19.920 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:19.920 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:20.180 null3 00:38:20.180 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:20.180 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:20.180 20:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:20.746 null4 00:38:20.746 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:20.746 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:20.746 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:20.746 null5 00:38:21.004 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:21.004 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:21.004 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:21.262 null6 00:38:21.262 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:21.262 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:21.262 20:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:21.521 null7 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.521 20:08:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.521 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
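This phase swaps the single perf-driven loop for eight concurrent hotplug workers: once spdk_nvme_perf has exited (the kill -0 above that reports "No such process"), namespaces 1 and 2 are removed, eight null bdevs null0 through null7 are created with bdev_null_create nullN 100 4096, and add_remove is launched once per bdev in the background; each worker attaches its bdev as namespace N and detaches it again for ten iterations before the parent waits on all eight worker PIDs. A compact sketch of that pattern follows; add_remove here is a paraphrase of the xtrace above (ns_hotplug_stress.sh@14 through @18), not a verbatim copy of the script.

  RPC="/path/to/spdk/scripts/rpc.py"   # placeholder for the rpc.py path used in the log
  NQN=nqn.2016-06.io.spdk:cnode1

  add_remove() {                       # flip one namespace on and off ten times
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
          $RPC nvmf_subsystem_remove_ns "$NQN" "$nsid"
      done
  }

  for ((i = 0; i < 8; i++)); do
      $RPC bdev_null_create "null$i" 100 4096        # null0 .. null7
  done

  pids=()
  for ((i = 0; i < 8; i++)); do
      add_remove "$((i + 1))" "null$i" &             # worker i hammers namespace i+1
      pids+=($!)
  done
  wait "${pids[@]}"                                  # the log's 'wait 3170538 3170539 ...' step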
00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3170538 3170539 3170541 3170543 3170545 3170547 3170549 3170551 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.522 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:21.780 20:08:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:21.780 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.038 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.039 20:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:22.327 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.607 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:22.865 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:23.123 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:23.381 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.381 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.381 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.381 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:23.381 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.382 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:23.640 20:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:23.640 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.899 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:24.157 
20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:24.157 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.416 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:24.674 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:24.674 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:24.674 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:24.674 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.674 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:24.674 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:24.675 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:24.675 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:24.933 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.191 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:25.449 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:25.707 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:25.965 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.965 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:25.965 20:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:25.965 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:25.965 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:25.965 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:25.966 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:25.966 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:26.224 
20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.224 20:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:26.482 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:26.740 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:26.999 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.999 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.999 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:26.999 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:26.999 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:26.999 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:27.257 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.515 rmmod nvme_tcp 00:38:27.515 rmmod nvme_fabrics 00:38:27.515 rmmod nvme_keyring 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3166106 ']' 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3166106 
00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3166106 ']' 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3166106 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3166106 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:27.515 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3166106' 00:38:27.515 killing process with pid 3166106 00:38:27.516 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3166106 00:38:27.516 20:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3166106 00:38:28.889 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.890 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.792 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.792 00:38:30.792 real 0m49.377s 00:38:30.792 user 3m19.954s 00:38:30.792 sys 0m22.810s 00:38:30.792 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:30.792 20:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:30.792 ************************************ 00:38:30.792 END TEST nvmf_ns_hotplug_stress 00:38:30.792 ************************************ 00:38:30.792 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:30.792 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:30.792 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:30.792 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:31.051 ************************************ 00:38:31.051 START TEST nvmf_delete_subsystem 00:38:31.051 ************************************ 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:31.051 * Looking for test storage... 00:38:31.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:31.051 20:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:31.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.051 --rc genhtml_branch_coverage=1 00:38:31.051 --rc genhtml_function_coverage=1 00:38:31.051 --rc genhtml_legend=1 00:38:31.051 --rc geninfo_all_blocks=1 00:38:31.051 --rc geninfo_unexecuted_blocks=1 00:38:31.051 00:38:31.051 ' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:31.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.051 --rc genhtml_branch_coverage=1 00:38:31.051 --rc genhtml_function_coverage=1 00:38:31.051 --rc genhtml_legend=1 00:38:31.051 --rc geninfo_all_blocks=1 00:38:31.051 --rc geninfo_unexecuted_blocks=1 00:38:31.051 00:38:31.051 ' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:31.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.051 --rc genhtml_branch_coverage=1 00:38:31.051 --rc genhtml_function_coverage=1 00:38:31.051 --rc genhtml_legend=1 00:38:31.051 --rc geninfo_all_blocks=1 00:38:31.051 --rc 
geninfo_unexecuted_blocks=1 00:38:31.051 00:38:31.051 ' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:31.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.051 --rc genhtml_branch_coverage=1 00:38:31.051 --rc genhtml_function_coverage=1 00:38:31.051 --rc genhtml_legend=1 00:38:31.051 --rc geninfo_all_blocks=1 00:38:31.051 --rc geninfo_unexecuted_blocks=1 00:38:31.051 00:38:31.051 ' 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:31.051 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:31.052 20:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:31.052 20:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:33.587 20:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:33.587 20:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:33.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:33.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.587 20:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:33.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:33.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:33.587 20:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:33.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:38:33.587 00:38:33.587 --- 10.0.0.2 ping statistics --- 00:38:33.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.587 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:33.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:33.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:38:33.587 00:38:33.587 --- 10.0.0.1 ping statistics --- 00:38:33.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.587 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3173551 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3173551 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3173551 ']' 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
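The trace above wires up the phy TCP test topology before the target application is launched: the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, and TCP port 4420 is opened in iptables. A condensed sketch of those steps, using only the interface names, addresses, and commands visible in the log (this is not the full nvmftestinit logic, paths are shortened, and error handling is omitted):

    # target NIC lives in its own network namespace; initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity checks in both directions, matching the ping output above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmf_tgt is then started inside the target namespace (pid 3173551 in this run)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &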
00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:33.587 20:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:33.587 [2024-10-13 20:08:23.201854] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:33.587 [2024-10-13 20:08:23.204438] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:38:33.587 [2024-10-13 20:08:23.204545] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.587 [2024-10-13 20:08:23.347939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:33.848 [2024-10-13 20:08:23.489253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.848 [2024-10-13 20:08:23.489332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.848 [2024-10-13 20:08:23.489362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.848 [2024-10-13 20:08:23.489383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.848 [2024-10-13 20:08:23.489419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.848 [2024-10-13 20:08:23.492017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.848 [2024-10-13 20:08:23.492024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.109 [2024-10-13 20:08:23.866795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:34.109 [2024-10-13 20:08:23.867556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:34.109 [2024-10-13 20:08:23.867902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:38:34.368 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:34.368 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:38:34.368 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:34.368 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.369 [2024-10-13 20:08:24.173103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.369 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.630 [2024-10-13 20:08:24.193471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.630 NULL1 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.630 20:08:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.630 Delay0 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3173703 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:34.630 20:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:34.630 [2024-10-13 20:08:24.315579] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
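At this point the subsystem is fully built and spdk_nvme_perf is already driving I/O against it; two seconds later the test deletes the subsystem out from under the initiator, which is why the completions below come back with errors (sct=0, sc=8). A minimal sketch of the sequence the trace records, assuming the rpc_cmd wrapper forwards to scripts/rpc.py against the target started above (paths shortened, return checks omitted):

    # build the target side: TCP transport, subsystem cnode1, listener, and a delayed null bdev
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # start the initiator workload in the background, then remove the subsystem while it runs
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # in-flight I/O now completes with errors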
00:38:36.535 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:36.535 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.535 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Write completed 
with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 starting I/O failed: -6 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.796 
starting I/O failed: -6 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Read completed with error (sct=0, sc=8) 00:38:36.796 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 starting I/O failed: -6 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 starting I/O failed: -6 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 [2024-10-13 20:08:26.419688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with 
error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Read completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:36.797 Write completed with error (sct=0, sc=8) 00:38:37.733 [2024-10-13 20:08:27.385047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 [2024-10-13 20:08:27.421269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) 
to be set 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 [2024-10-13 20:08:27.422440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Write completed with error (sct=0, sc=8) 00:38:37.733 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 [2024-10-13 20:08:27.423309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, 
sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Read completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 Write completed with error (sct=0, sc=8) 00:38:37.734 [2024-10-13 20:08:27.424543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:38:37.734 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.734 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:37.734 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3173703 00:38:37.734 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:37.734 Initializing NVMe Controllers 00:38:37.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:37.734 Controller IO queue size 128, less than required. 00:38:37.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:37.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:37.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:37.734 Initialization complete. Launching workers. 
00:38:37.734 ======================================================== 00:38:37.734 Latency(us) 00:38:37.734 Device Information : IOPS MiB/s Average min max 00:38:37.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.88 0.08 900186.23 974.42 1015734.99 00:38:37.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.87 0.08 895376.36 799.50 1017153.95 00:38:37.734 ======================================================== 00:38:37.734 Total : 342.75 0.17 897774.34 799.50 1017153.95 00:38:37.734 00:38:37.734 [2024-10-13 20:08:27.429569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:38:37.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3173703 00:38:38.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3173703) - No such process 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3173703 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3173703 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:38:38.378 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3173703 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.379 
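The trace above shows delete_subsystem.sh polling the stopped perf process with kill -0 (delay counter, 0.5 s sleeps) and then confirming via wait that it is gone ("No such process"). A minimal sketch of that wait-for-exit pattern, with $pid standing in for the PID seen in the log:

# Poll until the process disappears, as the script above does: a delay
# counter, 0.5 s sleeps, roughly 30 attempts. $pid is a placeholder.
delay=0
while kill -0 "$pid" 2>/dev/null; do
    (( delay++ > 30 )) && break
    sleep 0.5
done
# Once kill -0 reports "No such process", wait is expected to fail as well,
# confirming the process really exited before the test continues.
wait "$pid" 2>/dev/null || true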
20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:38.379 [2024-10-13 20:08:27.945423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.379 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3174107 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:38.380 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:38.381 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:38.381 [2024-10-13 20:08:28.045541] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
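The rpc_cmd calls and the perf invocation logged here recreate the subsystem and then put I/O load on it in the background before it is deleted again. A condensed sketch of that sequence, assuming rpc_cmd and the Delay0 bdev come from the surrounding harness and shortening the full workspace path to spdk_nvme_perf:

# Recreate the subsystem, listener and namespace via the target's RPC socket
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 3-second 70/30 randrw workload at queue depth 128, run in the background
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!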
00:38:38.951 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:38.951 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:38.951 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:39.209 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:39.209 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:39.209 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:39.775 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:39.775 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:39.775 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:40.342 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:40.342 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:40.342 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:40.909 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:40.909 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:40.909 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:41.168 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:41.168 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:41.168 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:41.428 Initializing NVMe Controllers 00:38:41.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:41.428 Controller IO queue size 128, less than required. 00:38:41.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:41.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:41.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:41.428 Initialization complete. Launching workers. 
00:38:41.428 ======================================================== 00:38:41.428 Latency(us) 00:38:41.428 Device Information : IOPS MiB/s Average min max 00:38:41.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1007500.93 1000250.85 1045252.35 00:38:41.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006686.96 1000356.27 1045294.01 00:38:41.428 ======================================================== 00:38:41.428 Total : 256.00 0.12 1007093.95 1000250.85 1045294.01 00:38:41.428 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3174107 00:38:41.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3174107) - No such process 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3174107 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.687 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.687 rmmod nvme_tcp 00:38:41.945 rmmod nvme_fabrics 00:38:41.945 rmmod nvme_keyring 00:38:41.945 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.945 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:41.945 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:41.945 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3173551 ']' 00:38:41.945 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3173551 00:38:41.945 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3173551 ']' 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3173551 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173551 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173551' 00:38:41.946 killing process with pid 3173551 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3173551 00:38:41.946 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3173551 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:42.885 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.424 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.424 00:38:45.424 real 0m14.021s 00:38:45.424 user 0m26.089s 00:38:45.424 sys 0m3.995s 00:38:45.424 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:45.424 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:45.424 ************************************ 00:38:45.424 END TEST nvmf_delete_subsystem 00:38:45.424 ************************************ 00:38:45.424 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:45.424 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:45.424 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:45.425 ************************************ 00:38:45.425 START TEST nvmf_host_management 00:38:45.425 ************************************ 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:45.425 * Looking for test storage... 00:38:45.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.425 --rc genhtml_branch_coverage=1 00:38:45.425 --rc genhtml_function_coverage=1 00:38:45.425 --rc genhtml_legend=1 00:38:45.425 --rc geninfo_all_blocks=1 00:38:45.425 --rc geninfo_unexecuted_blocks=1 00:38:45.425 00:38:45.425 ' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.425 --rc genhtml_branch_coverage=1 00:38:45.425 --rc genhtml_function_coverage=1 00:38:45.425 --rc genhtml_legend=1 00:38:45.425 --rc geninfo_all_blocks=1 00:38:45.425 --rc geninfo_unexecuted_blocks=1 00:38:45.425 00:38:45.425 ' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.425 --rc genhtml_branch_coverage=1 00:38:45.425 --rc genhtml_function_coverage=1 00:38:45.425 --rc genhtml_legend=1 00:38:45.425 --rc geninfo_all_blocks=1 00:38:45.425 --rc geninfo_unexecuted_blocks=1 00:38:45.425 00:38:45.425 ' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.425 --rc genhtml_branch_coverage=1 00:38:45.425 --rc genhtml_function_coverage=1 00:38:45.425 --rc genhtml_legend=1 
00:38:45.425 --rc geninfo_all_blocks=1 00:38:45.425 --rc geninfo_unexecuted_blocks=1 00:38:45.425 00:38:45.425 ' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.425 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.426 20:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:47.331 20:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.331 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:47.332 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:47.332 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:47.332 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:47.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:47.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:38:47.332 00:38:47.332 --- 10.0.0.2 ping statistics --- 00:38:47.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.332 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:47.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:38:47.332 00:38:47.332 --- 10.0.0.1 ping statistics --- 00:38:47.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.332 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3176570 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3176570 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3176570 ']' 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.332 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:47.333 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:47.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.333 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:47.333 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:47.333 [2024-10-13 20:08:36.930731] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:47.333 [2024-10-13 20:08:36.933245] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:38:47.333 [2024-10-13 20:08:36.933352] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.333 [2024-10-13 20:08:37.065799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:47.592 [2024-10-13 20:08:37.200978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.592 [2024-10-13 20:08:37.201063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.592 [2024-10-13 20:08:37.201093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.592 [2024-10-13 20:08:37.201114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.592 [2024-10-13 20:08:37.201136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.592 [2024-10-13 20:08:37.204049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:47.592 [2024-10-13 20:08:37.204162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:47.592 [2024-10-13 20:08:37.204204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.592 [2024-10-13 20:08:37.204229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:47.851 [2024-10-13 20:08:37.580012] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:47.851 [2024-10-13 20:08:37.589757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:47.851 [2024-10-13 20:08:37.590010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:47.851 [2024-10-13 20:08:37.590898] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.851 [2024-10-13 20:08:37.591249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
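The setup traced above reduces to a short sequence: one of the two test interfaces (cvl_0_0) is moved into a private network namespace, the two sides get 10.0.0.2 and 10.0.0.1 on a /24, TCP port 4420 is opened in the local firewall, reachability is verified with ping in both directions, and nvmf_tgt is then started inside the namespace in interrupt mode. The lines below are a simplified sketch condensed from the logged commands, not the literal nvmf/common.sh code, which wraps these steps in helper functions and environment variables; the relative nvmf_tgt path is illustrative.

  # Sketch of the namespace setup shown above (condensed from the logged commands; assumptions noted inline)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP (port 4420) traffic through the firewall
  ping -c 1 10.0.0.2                                      # default namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> default namespace
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &   # path simplified for illustration

The -m 0x1E mask pins the target to cores 1 through 4, which matches the four reactor threads reported above, while bdevperf later runs with -c 0x1 on core 0.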
00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:48.111 [2024-10-13 20:08:37.913384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:48.111 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:48.370 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:48.370 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:48.370 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:48.370 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.370 20:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:48.370 Malloc0 00:38:48.370 [2024-10-13 20:08:38.041652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3176740 00:38:48.370 20:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3176740 /var/tmp/bdevperf.sock 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3176740 ']' 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:48.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:48.370 { 00:38:48.370 "params": { 00:38:48.370 "name": "Nvme$subsystem", 00:38:48.370 "trtype": "$TEST_TRANSPORT", 00:38:48.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.370 "adrfam": "ipv4", 00:38:48.370 "trsvcid": "$NVMF_PORT", 00:38:48.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.370 "hdgst": ${hdgst:-false}, 00:38:48.370 "ddgst": ${ddgst:-false} 00:38:48.370 }, 00:38:48.370 "method": "bdev_nvme_attach_controller" 00:38:48.370 } 00:38:48.370 EOF 00:38:48.370 )") 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:38:48.370 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:48.370 "params": { 00:38:48.370 "name": "Nvme0", 00:38:48.370 "trtype": "tcp", 00:38:48.370 "traddr": "10.0.0.2", 00:38:48.370 "adrfam": "ipv4", 00:38:48.370 "trsvcid": "4420", 00:38:48.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:48.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:48.370 "hdgst": false, 00:38:48.370 "ddgst": false 00:38:48.370 }, 00:38:48.370 "method": "bdev_nvme_attach_controller" 00:38:48.370 }' 00:38:48.370 [2024-10-13 20:08:38.162602] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:38:48.370 [2024-10-13 20:08:38.162771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3176740 ] 00:38:48.629 [2024-10-13 20:08:38.291047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.629 [2024-10-13 20:08:38.421906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.195 Running I/O for 10 seconds... 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=384 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 384 -ge 100 ']' 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:49.456 [2024-10-13 20:08:39.183088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:49.456 [2024-10-13 20:08:39.183172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.183200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:49.456 [2024-10-13 20:08:39.183222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.183252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:49.456 [2024-10-13 20:08:39.183273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.183295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:49.456 [2024-10-13 20:08:39.183314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.183334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:49.456 [2024-10-13 20:08:39.193297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.456 [2024-10-13 20:08:39.193438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 [2024-10-13 20:08:39.193480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:49.456 [2024-10-13 20:08:39.193546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 [2024-10-13 20:08:39.193614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 [2024-10-13 20:08:39.193659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 [2024-10-13 20:08:39.193716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 [2024-10-13 20:08:39.193769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.456 [2024-10-13 20:08:39.193816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.456 [2024-10-13 20:08:39.193847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.193870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.193894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.193917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.193951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.193973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.193997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.194970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.194991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.457 [2024-10-13 20:08:39.195867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.457 [2024-10-13 20:08:39.195892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.195914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.195939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.195962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.195986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:49.458 [2024-10-13 20:08:39.196311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.458 [2024-10-13 20:08:39.196578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.458 [2024-10-13 20:08:39.196875] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:38:49.458 [2024-10-13 20:08:39.198091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:49.458 task offset: 57344 on job bdev=Nvme0n1 fails 00:38:49.458 00:38:49.458 Latency(us) 00:38:49.458 [2024-10-13T18:08:39.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.458 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:49.458 Job: Nvme0n1 ended in about 0.35 seconds with error 00:38:49.458 Verification LBA range: start 0x0 length 0x400 00:38:49.458 Nvme0n1 : 0.35 1284.65 80.29 183.52 0.00 42105.48 3980.71 41943.04 00:38:49.458 [2024-10-13T18:08:39.273Z] =================================================================================================================== 00:38:49.458 [2024-10-13T18:08:39.273Z] Total : 1284.65 80.29 183.52 0.00 42105.48 3980.71 41943.04 00:38:49.458 [2024-10-13 20:08:39.202867] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:49.458 [2024-10-13 20:08:39.208323] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3176740 00:38:50.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3176740) - No such process 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:50.395 { 00:38:50.395 "params": { 00:38:50.395 "name": "Nvme$subsystem", 00:38:50.395 "trtype": "$TEST_TRANSPORT", 00:38:50.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:50.395 "adrfam": "ipv4", 00:38:50.395 "trsvcid": "$NVMF_PORT", 00:38:50.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:50.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:50.395 "hdgst": ${hdgst:-false}, 00:38:50.395 "ddgst": ${ddgst:-false} 00:38:50.395 }, 00:38:50.395 "method": "bdev_nvme_attach_controller" 00:38:50.395 } 00:38:50.395 EOF 00:38:50.395 )") 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:38:50.395 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:50.395 "params": { 00:38:50.395 "name": "Nvme0", 00:38:50.395 "trtype": "tcp", 00:38:50.395 "traddr": "10.0.0.2", 00:38:50.395 "adrfam": "ipv4", 00:38:50.395 "trsvcid": "4420", 00:38:50.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:50.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:50.395 "hdgst": false, 00:38:50.395 "ddgst": false 00:38:50.395 }, 00:38:50.395 "method": "bdev_nvme_attach_controller" 00:38:50.395 }' 00:38:50.655 [2024-10-13 20:08:40.282188] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:38:50.655 [2024-10-13 20:08:40.282329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3177018 ] 00:38:50.655 [2024-10-13 20:08:40.412470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.914 [2024-10-13 20:08:40.543881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.480 Running I/O for 1 seconds... 00:38:52.416 1344.00 IOPS, 84.00 MiB/s 00:38:52.416 Latency(us) 00:38:52.416 [2024-10-13T18:08:42.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.416 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:52.416 Verification LBA range: start 0x0 length 0x400 00:38:52.416 Nvme0n1 : 1.03 1370.89 85.68 0.00 0.00 45902.39 8301.23 40001.23 00:38:52.416 [2024-10-13T18:08:42.231Z] =================================================================================================================== 00:38:52.416 [2024-10-13T18:08:42.231Z] Total : 1370.89 85.68 0.00 0.00 45902.39 8301.23 40001.23 00:38:53.353 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:53.353 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:53.353 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:53.353 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:53.353 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:53.353 rmmod nvme_tcp 00:38:53.353 rmmod nvme_fabrics 00:38:53.353 rmmod nvme_keyring 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3176570 ']' 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3176570 00:38:53.353 20:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3176570 ']' 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3176570 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3176570 00:38:53.353 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:53.354 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:53.354 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3176570' 00:38:53.354 killing process with pid 3176570 00:38:53.354 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3176570 00:38:53.354 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3176570 00:38:54.729 [2024-10-13 20:08:44.344470] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.729 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:57.267 00:38:57.267 real 0m11.809s 00:38:57.267 user 
0m26.124s 00:38:57.267 sys 0m4.469s 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:57.267 ************************************ 00:38:57.267 END TEST nvmf_host_management 00:38:57.267 ************************************ 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:57.267 ************************************ 00:38:57.267 START TEST nvmf_lvol 00:38:57.267 ************************************ 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:57.267 * Looking for test storage... 00:38:57.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:57.267 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:57.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.268 --rc genhtml_branch_coverage=1 00:38:57.268 --rc genhtml_function_coverage=1 00:38:57.268 --rc genhtml_legend=1 00:38:57.268 --rc geninfo_all_blocks=1 00:38:57.268 --rc geninfo_unexecuted_blocks=1 00:38:57.268 00:38:57.268 ' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:57.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.268 --rc genhtml_branch_coverage=1 00:38:57.268 --rc genhtml_function_coverage=1 00:38:57.268 --rc genhtml_legend=1 00:38:57.268 --rc geninfo_all_blocks=1 00:38:57.268 --rc geninfo_unexecuted_blocks=1 00:38:57.268 00:38:57.268 ' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:57.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.268 --rc genhtml_branch_coverage=1 00:38:57.268 --rc genhtml_function_coverage=1 00:38:57.268 --rc genhtml_legend=1 00:38:57.268 --rc geninfo_all_blocks=1 00:38:57.268 --rc geninfo_unexecuted_blocks=1 00:38:57.268 00:38:57.268 ' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:57.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.268 --rc genhtml_branch_coverage=1 00:38:57.268 --rc genhtml_function_coverage=1 
00:38:57.268 --rc genhtml_legend=1 00:38:57.268 --rc geninfo_all_blocks=1 00:38:57.268 --rc geninfo_unexecuted_blocks=1 00:38:57.268 00:38:57.268 ' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:57.268 20:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:57.268 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:57.269 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:59.174 20:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:59.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:59.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:59.174 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.174 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:59.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.175 
20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:59.175 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:59.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:38:59.436 00:38:59.436 --- 10.0.0.2 ping statistics --- 00:38:59.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.436 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:59.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:38:59.436 00:38:59.436 --- 10.0.0.1 ping statistics --- 00:38:59.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.436 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3179478 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3179478 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3179478 ']' 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:59.436 20:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:59.436 [2024-10-13 20:08:49.139835] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
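The nvmftestinit/nvmf_tcp_init sequence traced above moves one of the two detected ice ports (cvl_0_0) into a private network namespace, puts 10.0.0.2 on it and 10.0.0.1 on the peer port, opens the NVMe/TCP listener port in iptables, and verifies reachability in both directions before the target is started. A minimal standalone sketch of that setup, assuming the same interface names, namespace name, and addresses as this run:

    # create a namespace for the target side and move one port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 on the host; target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both ends (and the namespace loopback) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic to the default port 4420 and check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1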
00:38:59.436 [2024-10-13 20:08:49.142491] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:38:59.436 [2024-10-13 20:08:49.142592] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.696 [2024-10-13 20:08:49.280500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:59.696 [2024-10-13 20:08:49.415253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.696 [2024-10-13 20:08:49.415327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.696 [2024-10-13 20:08:49.415356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.696 [2024-10-13 20:08:49.415377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.696 [2024-10-13 20:08:49.415414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.696 [2024-10-13 20:08:49.418120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.696 [2024-10-13 20:08:49.418182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.696 [2024-10-13 20:08:49.418192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.266 [2024-10-13 20:08:49.792351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:00.266 [2024-10-13 20:08:49.793022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:00.266 [2024-10-13 20:08:49.793357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:00.266 [2024-10-13 20:08:49.794770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
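With the namespace in place, nvmfappstart launches the target inside it with the --interrupt-mode flag added by build_nvmf_app_args; the EAL and reactor notices above show the three cores of the 0x7 mask coming up and each poll-group thread being switched to interrupt mode. A rough sketch of the launch and the wait for the RPC socket, assuming the workspace path from this run; the polling loop is a hypothetical stand-in for the waitforlisten helper the test actually uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start nvmf_tgt on cores 0-2, interrupt mode, inside the target namespace
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!

    # poll the default RPC socket until the app answers (stand-in for waitforlisten)
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done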
00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:00.525 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:00.783 [2024-10-13 20:08:50.403286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.783 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:01.043 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:01.043 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:01.614 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:01.614 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:01.614 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:02.182 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=71aa6849-43a4-4c06-bc4b-bab9cf665a06 00:39:02.182 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 71aa6849-43a4-4c06-bc4b-bab9cf665a06 lvol 20 00:39:02.182 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3dc5ec53-efa0-4e85-8666-138d699c2db0 00:39:02.182 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:02.441 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3dc5ec53-efa0-4e85-8666-138d699c2db0 00:39:03.008 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:03.008 [2024-10-13 20:08:52.783493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:39:03.008 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:03.267 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3179913 00:39:03.267 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:03.267 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:04.644 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3dc5ec53-efa0-4e85-8666-138d699c2db0 MY_SNAPSHOT 00:39:04.644 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1342a2ad-cfa8-4de8-8715-7bd6210707f2 00:39:04.644 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3dc5ec53-efa0-4e85-8666-138d699c2db0 30 00:39:04.902 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1342a2ad-cfa8-4de8-8715-7bd6210707f2 MY_CLONE 00:39:05.472 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ac9b8af0-d9b6-490a-9e9e-cdc036b8d00d 00:39:05.472 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ac9b8af0-d9b6-490a-9e9e-cdc036b8d00d 00:39:06.040 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3179913 00:39:14.173 Initializing NVMe Controllers 00:39:14.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:14.173 Controller IO queue size 128, less than required. 00:39:14.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:14.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:14.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:14.173 Initialization complete. Launching workers. 
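Everything the lvol test provisions is driven through rpc.py against that target: a TCP transport, two malloc bdevs striped into a raid0, a logical volume store on top of it, a 20 MiB lvol exported through nqn.2016-06.io.spdk:cnode0, and then snapshot, resize-to-30, clone, and inflate operations exercised while spdk_nvme_perf drives random writes from the initiator side. A condensed sketch of that flow, assuming the lvstore/lvol/snapshot/clone identifiers are captured from the create calls as the script does (the literal UUIDs in the log belong to this run only):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    # transport plus a raid0 built from two 64 MiB / 512 B malloc bdevs
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512            # -> Malloc0
    $rpc bdev_malloc_create 64 512            # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

    # lvstore + 20 MiB lvol, exported over NVMe/TCP on 10.0.0.2:4420
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # perf runs in the background against the exported lvol ...
    "$SPDK/build/bin/spdk_nvme_perf" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1

    # ... while snapshot/resize/clone/inflate are exercised on the live volume
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait $perf_pid

After the 10-second run, the perf report that follows shows roughly 16.7k write IOPS split across the two initiator cores, and the teardown below deletes the subsystem, the lvol, and the lvstore before shutting the target down.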
00:39:14.173 ======================================================== 00:39:14.173 Latency(us) 00:39:14.173 Device Information : IOPS MiB/s Average min max 00:39:14.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8415.11 32.87 15215.78 336.07 128704.80 00:39:14.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8319.91 32.50 15388.04 4731.08 188697.49 00:39:14.173 ======================================================== 00:39:14.173 Total : 16735.02 65.37 15301.42 336.07 188697.49 00:39:14.173 00:39:14.173 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:14.173 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3dc5ec53-efa0-4e85-8666-138d699c2db0 00:39:14.432 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71aa6849-43a4-4c06-bc4b-bab9cf665a06 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:14.690 rmmod nvme_tcp 00:39:14.690 rmmod nvme_fabrics 00:39:14.690 rmmod nvme_keyring 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3179478 ']' 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3179478 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3179478 ']' 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3179478 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3179478 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3179478' 00:39:14.690 killing process with pid 3179478 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3179478 00:39:14.690 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3179478 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.592 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.499 00:39:18.499 real 0m21.493s 00:39:18.499 user 0m59.083s 00:39:18.499 sys 0m7.405s 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:18.499 ************************************ 00:39:18.499 END TEST nvmf_lvol 00:39:18.499 ************************************ 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:18.499 ************************************ 00:39:18.499 START TEST nvmf_lvs_grow 00:39:18.499 
************************************ 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:18.499 * Looking for test storage... 00:39:18.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.499 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:18.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.499 --rc genhtml_branch_coverage=1 00:39:18.500 --rc genhtml_function_coverage=1 00:39:18.500 --rc genhtml_legend=1 00:39:18.500 --rc geninfo_all_blocks=1 00:39:18.500 --rc geninfo_unexecuted_blocks=1 00:39:18.500 00:39:18.500 ' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.500 --rc genhtml_branch_coverage=1 00:39:18.500 --rc genhtml_function_coverage=1 00:39:18.500 --rc genhtml_legend=1 00:39:18.500 --rc geninfo_all_blocks=1 00:39:18.500 --rc geninfo_unexecuted_blocks=1 00:39:18.500 00:39:18.500 ' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.500 --rc genhtml_branch_coverage=1 00:39:18.500 --rc genhtml_function_coverage=1 00:39:18.500 --rc genhtml_legend=1 00:39:18.500 --rc geninfo_all_blocks=1 00:39:18.500 --rc geninfo_unexecuted_blocks=1 00:39:18.500 00:39:18.500 ' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.500 --rc genhtml_branch_coverage=1 00:39:18.500 --rc genhtml_function_coverage=1 00:39:18.500 --rc genhtml_legend=1 00:39:18.500 --rc geninfo_all_blocks=1 00:39:18.500 --rc geninfo_unexecuted_blocks=1 00:39:18.500 00:39:18.500 ' 00:39:18.500 20:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
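build_nvmf_app_args, re-run here as the lvs_grow test sources nvmf/common.sh again, assembles the NVMF_APP command line the same way the lvol test did: the shared-memory id and full tracepoint mask are always appended, --interrupt-mode is added because this suite runs with that flag, and once the namespace exists the whole command is prefixed with the netns wrapper. A small sketch of how those pieces compose, using the variable names visible in the trace; the initial value of NVMF_APP (the nvmf_tgt binary path) is an assumption, since the log does not show it:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NVMF_APP_SHM_ID=0
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    NVMF_APP=("$SPDK/build/bin/nvmf_tgt")            # assumed starting value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id + full tracepoint mask
    NVMF_APP+=(--interrupt-mode)                     # added for the interrupt-mode suite
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run inside the target netns
    "${NVMF_APP[@]}" -m 0x7 &                        # nvmfappstart supplies the core mask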
00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:18.500 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.405 20:09:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
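The arrays being filled above classify the host's NICs by PCI vendor:device ID (Intel 0x8086 E810/X722 parts versus Mellanox 0x15b3 ConnectX parts) before the TCP test interfaces are chosen. A hedged, stand-alone sketch of the same idea read directly from sysfs (nvmf/common.sh itself walks a pre-built pci_bus_cache, so this is an illustration, not its code):

  intel=0x8086 mellanox=0x15b3
  e810=()
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
      case "$vendor:$device" in
          "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;   # E810 family (ice driver)
      esac
  done
  echo "E810 ports: ${e810[*]:-none}"   # this run found 0000:0a:00.0 and 0000:0a:00.1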
00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:20.405 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:20.405 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:20.405 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:20.405 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.405 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.406 20:09:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:39:20.406 00:39:20.406 --- 10.0.0.2 ping statistics --- 00:39:20.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.406 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:20.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:39:20.406 00:39:20.406 --- 10.0.0.1 ping statistics --- 00:39:20.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.406 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:20.406 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3183911 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3183911 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3183911 ']' 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.665 20:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:20.665 [2024-10-13 20:09:10.332747] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
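The nvmf_tcp_init block traced above builds the TCP test bed by moving the target-side port into its own network namespace, addressing both ends, opening TCP/4420, and ping-checking the link before the target is started inside that namespace. A recap of the commands as they appear in the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones printed above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1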
00:39:20.665 [2024-10-13 20:09:10.335468] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:39:20.665 [2024-10-13 20:09:10.335562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.925 [2024-10-13 20:09:10.485092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.925 [2024-10-13 20:09:10.623741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.925 [2024-10-13 20:09:10.623837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.925 [2024-10-13 20:09:10.623867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.925 [2024-10-13 20:09:10.623890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.925 [2024-10-13 20:09:10.623912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.925 [2024-10-13 20:09:10.625569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.184 [2024-10-13 20:09:10.997439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:21.184 [2024-10-13 20:09:10.997903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.751 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:22.009 [2024-10-13 20:09:11.586727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:22.009 ************************************ 00:39:22.009 START TEST lvs_grow_clean 00:39:22.009 ************************************ 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:22.009 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:22.267 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:22.267 20:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:22.527 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:22.527 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:22.527 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:22.786 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:22.786 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:22.786 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 lvol 150 00:39:23.046 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d6cf0c80-c0ae-4085-90b0-4fc50f0b65da 00:39:23.046 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:23.046 20:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:23.307 [2024-10-13 20:09:12.994590] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:23.307 [2024-10-13 20:09:12.994758] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:23.307 true 00:39:23.307 20:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:23.307 20:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:23.566 20:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:23.566 20:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:23.826 20:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6cf0c80-c0ae-4085-90b0-4fc50f0b65da 00:39:24.086 20:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:24.346 [2024-10-13 20:09:14.107098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.347 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3184467 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3184467 /var/tmp/bdevperf.sock 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3184467 ']' 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:24.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:24.606 20:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:24.864 [2024-10-13 20:09:14.484529] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:39:24.864 [2024-10-13 20:09:14.484687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184467 ] 00:39:24.864 [2024-10-13 20:09:14.613904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.124 [2024-10-13 20:09:14.750420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.065 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:26.065 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:39:26.065 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:26.324 Nvme0n1 00:39:26.324 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:26.583 [ 00:39:26.583 { 00:39:26.583 "name": "Nvme0n1", 00:39:26.583 "aliases": [ 00:39:26.583 "d6cf0c80-c0ae-4085-90b0-4fc50f0b65da" 00:39:26.583 ], 00:39:26.583 "product_name": "NVMe disk", 00:39:26.583 "block_size": 4096, 00:39:26.583 "num_blocks": 38912, 00:39:26.583 "uuid": "d6cf0c80-c0ae-4085-90b0-4fc50f0b65da", 00:39:26.583 "numa_id": 0, 00:39:26.583 "assigned_rate_limits": { 00:39:26.583 "rw_ios_per_sec": 0, 00:39:26.583 "rw_mbytes_per_sec": 0, 00:39:26.583 "r_mbytes_per_sec": 0, 00:39:26.583 "w_mbytes_per_sec": 0 00:39:26.583 }, 00:39:26.583 "claimed": false, 00:39:26.583 "zoned": false, 00:39:26.583 "supported_io_types": { 00:39:26.583 "read": true, 00:39:26.583 "write": true, 00:39:26.583 "unmap": true, 00:39:26.583 "flush": true, 00:39:26.583 "reset": true, 00:39:26.583 "nvme_admin": true, 00:39:26.583 "nvme_io": true, 00:39:26.583 "nvme_io_md": false, 00:39:26.583 "write_zeroes": true, 00:39:26.583 "zcopy": false, 00:39:26.583 "get_zone_info": false, 00:39:26.583 "zone_management": false, 00:39:26.583 "zone_append": false, 00:39:26.583 "compare": true, 00:39:26.583 "compare_and_write": true, 00:39:26.583 "abort": true, 00:39:26.583 "seek_hole": false, 00:39:26.583 "seek_data": false, 00:39:26.583 "copy": true, 
00:39:26.583 "nvme_iov_md": false 00:39:26.583 }, 00:39:26.583 "memory_domains": [ 00:39:26.583 { 00:39:26.583 "dma_device_id": "system", 00:39:26.583 "dma_device_type": 1 00:39:26.583 } 00:39:26.583 ], 00:39:26.583 "driver_specific": { 00:39:26.583 "nvme": [ 00:39:26.583 { 00:39:26.583 "trid": { 00:39:26.583 "trtype": "TCP", 00:39:26.583 "adrfam": "IPv4", 00:39:26.583 "traddr": "10.0.0.2", 00:39:26.583 "trsvcid": "4420", 00:39:26.583 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:26.583 }, 00:39:26.583 "ctrlr_data": { 00:39:26.583 "cntlid": 1, 00:39:26.583 "vendor_id": "0x8086", 00:39:26.583 "model_number": "SPDK bdev Controller", 00:39:26.583 "serial_number": "SPDK0", 00:39:26.583 "firmware_revision": "25.01", 00:39:26.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.583 "oacs": { 00:39:26.583 "security": 0, 00:39:26.583 "format": 0, 00:39:26.583 "firmware": 0, 00:39:26.583 "ns_manage": 0 00:39:26.583 }, 00:39:26.583 "multi_ctrlr": true, 00:39:26.583 "ana_reporting": false 00:39:26.583 }, 00:39:26.583 "vs": { 00:39:26.583 "nvme_version": "1.3" 00:39:26.583 }, 00:39:26.583 "ns_data": { 00:39:26.583 "id": 1, 00:39:26.583 "can_share": true 00:39:26.583 } 00:39:26.583 } 00:39:26.583 ], 00:39:26.583 "mp_policy": "active_passive" 00:39:26.583 } 00:39:26.583 } 00:39:26.583 ] 00:39:26.583 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3184733 00:39:26.583 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:26.583 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:26.843 Running I/O for 10 seconds... 
00:39:27.784 Latency(us) 00:39:27.784 [2024-10-13T18:09:17.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:27.784 Nvme0n1 : 1.00 11060.00 43.20 0.00 0.00 0.00 0.00 0.00 00:39:27.784 [2024-10-13T18:09:17.599Z] =================================================================================================================== 00:39:27.784 [2024-10-13T18:09:17.599Z] Total : 11060.00 43.20 0.00 0.00 0.00 0.00 0.00 00:39:27.784 00:39:28.725 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:28.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:28.725 Nvme0n1 : 2.00 10930.50 42.70 0.00 0.00 0.00 0.00 0.00 00:39:28.725 [2024-10-13T18:09:18.540Z] =================================================================================================================== 00:39:28.725 [2024-10-13T18:09:18.540Z] Total : 10930.50 42.70 0.00 0.00 0.00 0.00 0.00 00:39:28.725 00:39:28.985 true 00:39:28.985 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:28.985 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:29.243 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:29.243 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:29.243 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3184733 00:39:29.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:29.810 Nvme0n1 : 3.00 10891.67 42.55 0.00 0.00 0.00 0.00 0.00 00:39:29.810 [2024-10-13T18:09:19.625Z] =================================================================================================================== 00:39:29.810 [2024-10-13T18:09:19.625Z] Total : 10891.67 42.55 0.00 0.00 0.00 0.00 0.00 00:39:29.810 00:39:30.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:30.780 Nvme0n1 : 4.00 10886.00 42.52 0.00 0.00 0.00 0.00 0.00 00:39:30.780 [2024-10-13T18:09:20.595Z] =================================================================================================================== 00:39:30.780 [2024-10-13T18:09:20.595Z] Total : 10886.00 42.52 0.00 0.00 0.00 0.00 0.00 00:39:30.780 00:39:31.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:31.761 Nvme0n1 : 5.00 10910.40 42.62 0.00 0.00 0.00 0.00 0.00 00:39:31.761 [2024-10-13T18:09:21.576Z] =================================================================================================================== 00:39:31.761 [2024-10-13T18:09:21.576Z] Total : 10910.40 42.62 0.00 0.00 0.00 0.00 0.00 00:39:31.761 00:39:32.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:32.700 Nvme0n1 : 6.00 10927.67 42.69 0.00 0.00 0.00 0.00 0.00 00:39:32.700 [2024-10-13T18:09:22.515Z] 
=================================================================================================================== 00:39:32.700 [2024-10-13T18:09:22.515Z] Total : 10927.67 42.69 0.00 0.00 0.00 0.00 0.00 00:39:32.700 00:39:34.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:34.081 Nvme0n1 : 7.00 10928.86 42.69 0.00 0.00 0.00 0.00 0.00 00:39:34.081 [2024-10-13T18:09:23.896Z] =================================================================================================================== 00:39:34.081 [2024-10-13T18:09:23.896Z] Total : 10928.86 42.69 0.00 0.00 0.00 0.00 0.00 00:39:34.081 00:39:35.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:35.018 Nvme0n1 : 8.00 10914.88 42.64 0.00 0.00 0.00 0.00 0.00 00:39:35.018 [2024-10-13T18:09:24.833Z] =================================================================================================================== 00:39:35.018 [2024-10-13T18:09:24.833Z] Total : 10914.88 42.64 0.00 0.00 0.00 0.00 0.00 00:39:35.018 00:39:35.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:35.956 Nvme0n1 : 9.00 10946.22 42.76 0.00 0.00 0.00 0.00 0.00 00:39:35.956 [2024-10-13T18:09:25.771Z] =================================================================================================================== 00:39:35.956 [2024-10-13T18:09:25.771Z] Total : 10946.22 42.76 0.00 0.00 0.00 0.00 0.00 00:39:35.956 00:39:36.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:36.890 Nvme0n1 : 10.00 10960.50 42.81 0.00 0.00 0.00 0.00 0.00 00:39:36.890 [2024-10-13T18:09:26.705Z] =================================================================================================================== 00:39:36.890 [2024-10-13T18:09:26.705Z] Total : 10960.50 42.81 0.00 0.00 0.00 0.00 0.00 00:39:36.890 00:39:36.890 00:39:36.890 Latency(us) 00:39:36.890 [2024-10-13T18:09:26.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:36.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:36.890 Nvme0n1 : 10.01 10961.59 42.82 0.00 0.00 11669.95 6796.33 24466.77 00:39:36.890 [2024-10-13T18:09:26.705Z] =================================================================================================================== 00:39:36.890 [2024-10-13T18:09:26.705Z] Total : 10961.59 42.82 0.00 0.00 11669.95 6796.33 24466.77 00:39:36.890 { 00:39:36.890 "results": [ 00:39:36.890 { 00:39:36.890 "job": "Nvme0n1", 00:39:36.890 "core_mask": "0x2", 00:39:36.890 "workload": "randwrite", 00:39:36.890 "status": "finished", 00:39:36.890 "queue_depth": 128, 00:39:36.890 "io_size": 4096, 00:39:36.890 "runtime": 10.007582, 00:39:36.890 "iops": 10961.58892327837, 00:39:36.890 "mibps": 42.818706731556134, 00:39:36.890 "io_failed": 0, 00:39:36.890 "io_timeout": 0, 00:39:36.890 "avg_latency_us": 11669.945179688662, 00:39:36.890 "min_latency_us": 6796.325925925926, 00:39:36.890 "max_latency_us": 24466.773333333334 00:39:36.890 } 00:39:36.890 ], 00:39:36.890 "core_count": 1 00:39:36.890 } 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3184467 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3184467 ']' 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3184467 
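The killprocess helper traced above verifies the PID before killing it: kill -0 probes for existence and ps --no-headers -o comm= fetches the process name (the real helper also special-cases processes running under sudo, which is elided here). A hedged, simplified sketch of that guard pattern:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true             # reap it if it was our child
  }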
00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3184467 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3184467' 00:39:36.890 killing process with pid 3184467 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3184467 00:39:36.890 Received shutdown signal, test time was about 10.000000 seconds 00:39:36.890 00:39:36.890 Latency(us) 00:39:36.890 [2024-10-13T18:09:26.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:36.890 [2024-10-13T18:09:26.705Z] =================================================================================================================== 00:39:36.890 [2024-10-13T18:09:26.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:36.890 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3184467 00:39:37.831 20:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:38.090 20:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:38.352 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:38.352 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:38.610 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:38.610 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:38.610 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:38.868 [2024-10-13 20:09:28.530571] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 
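With the backing aio_bdev deleted, the lvstore must disappear too, so the test wraps the lookup in the suite's NOT helper and expects the JSON-RPC "No such device" error that follows. A hedged sketch of that negative assertion written inline (the real NOT helper additionally distinguishes how the command failed, e.g. signal versus plain non-zero exit):

  if scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00; then
      echo "lvstore is still reachable after bdev_aio_delete" >&2
      exit 1
  fi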
00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:38.868 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:39.126 request: 00:39:39.126 { 00:39:39.126 "uuid": "2a7ad4fa-ef41-4699-aebd-bccf20632d00", 00:39:39.126 "method": "bdev_lvol_get_lvstores", 00:39:39.126 "req_id": 1 00:39:39.126 } 00:39:39.126 Got JSON-RPC error response 00:39:39.126 response: 00:39:39.126 { 00:39:39.126 "code": -19, 00:39:39.126 "message": "No such device" 00:39:39.126 } 00:39:39.126 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:39:39.126 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:39.126 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:39.126 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:39.126 20:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:39.385 aio_bdev 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
d6cf0c80-c0ae-4085-90b0-4fc50f0b65da 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d6cf0c80-c0ae-4085-90b0-4fc50f0b65da 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:39.385 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:39.645 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6cf0c80-c0ae-4085-90b0-4fc50f0b65da -t 2000 00:39:39.905 [ 00:39:39.905 { 00:39:39.905 "name": "d6cf0c80-c0ae-4085-90b0-4fc50f0b65da", 00:39:39.905 "aliases": [ 00:39:39.905 "lvs/lvol" 00:39:39.905 ], 00:39:39.905 "product_name": "Logical Volume", 00:39:39.905 "block_size": 4096, 00:39:39.905 "num_blocks": 38912, 00:39:39.905 "uuid": "d6cf0c80-c0ae-4085-90b0-4fc50f0b65da", 00:39:39.905 "assigned_rate_limits": { 00:39:39.905 "rw_ios_per_sec": 0, 00:39:39.905 "rw_mbytes_per_sec": 0, 00:39:39.905 "r_mbytes_per_sec": 0, 00:39:39.905 "w_mbytes_per_sec": 0 00:39:39.905 }, 00:39:39.905 "claimed": false, 00:39:39.905 "zoned": false, 00:39:39.905 "supported_io_types": { 00:39:39.905 "read": true, 00:39:39.905 "write": true, 00:39:39.905 "unmap": true, 00:39:39.905 "flush": false, 00:39:39.905 "reset": true, 00:39:39.905 "nvme_admin": false, 00:39:39.905 "nvme_io": false, 00:39:39.905 "nvme_io_md": false, 00:39:39.905 "write_zeroes": true, 00:39:39.905 "zcopy": false, 00:39:39.905 "get_zone_info": false, 00:39:39.905 "zone_management": false, 00:39:39.905 "zone_append": false, 00:39:39.905 "compare": false, 00:39:39.905 "compare_and_write": false, 00:39:39.905 "abort": false, 00:39:39.905 "seek_hole": true, 00:39:39.905 "seek_data": true, 00:39:39.905 "copy": false, 00:39:39.905 "nvme_iov_md": false 00:39:39.905 }, 00:39:39.905 "driver_specific": { 00:39:39.905 "lvol": { 00:39:39.905 "lvol_store_uuid": "2a7ad4fa-ef41-4699-aebd-bccf20632d00", 00:39:39.905 "base_bdev": "aio_bdev", 00:39:39.905 "thin_provision": false, 00:39:39.905 "num_allocated_clusters": 38, 00:39:39.905 "snapshot": false, 00:39:39.905 "clone": false, 00:39:39.905 "esnap_clone": false 00:39:39.905 } 00:39:39.905 } 00:39:39.905 } 00:39:39.905 ] 00:39:39.905 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:39:39.905 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:39.905 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:40.164 20:09:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:40.164 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:40.164 20:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:40.423 20:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:40.423 20:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6cf0c80-c0ae-4085-90b0-4fc50f0b65da 00:39:40.989 20:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a7ad4fa-ef41-4699-aebd-bccf20632d00 00:39:40.989 20:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.556 00:39:41.556 real 0m19.472s 00:39:41.556 user 0m19.445s 00:39:41.556 sys 0m1.853s 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:41.556 ************************************ 00:39:41.556 END TEST lvs_grow_clean 00:39:41.556 ************************************ 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:41.556 ************************************ 00:39:41.556 START TEST lvs_grow_dirty 00:39:41.556 ************************************ 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.556 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:41.816 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:41.816 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:42.075 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cd20841c-99ac-4a84-b7f8-4779819d9d11 00:39:42.075 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:39:42.075 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:42.334 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:42.334 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:42.334 20:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cd20841c-99ac-4a84-b7f8-4779819d9d11 lvol 150 00:39:42.593 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=52580c8c-0603-4365-81b5-690ee972846d 00:39:42.593 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:42.593 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:42.854 [2024-10-13 20:09:32.530530] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:42.854 [2024-10-13 20:09:32.530695] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:42.854 true 00:39:42.854 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:39:42.854 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:43.113 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:43.113 20:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:43.373 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52580c8c-0603-4365-81b5-690ee972846d 00:39:43.632 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:43.890 [2024-10-13 20:09:33.662970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:43.890 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3186761 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3186761 /var/tmp/bdevperf.sock 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3186761 ']' 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:44.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
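The xtrace above boils down to the following setup for the dirty-grow case. This is a minimal sketch rather than the test script itself: it assumes an SPDK nvmf_tgt is already running on the default RPC socket, that rpc.py from an SPDK checkout is on PATH, and it uses a scratch path of its own (AIO_FILE) in place of the Jenkins workspace file seen in the log.

  #!/usr/bin/env bash
  # Sketch of the lvs_grow_dirty setup recorded in the xtrace above.
  set -euo pipefail

  AIO_FILE=/tmp/aio_bdev        # stand-in for the workspace aio_bdev file (assumption)

  truncate -s 200M "$AIO_FILE"                                   # initial 200 MiB backing file
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096               # 4 KiB logical blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 4 MiB clusters -> 49 data clusters
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB thick-provisioned volume

  truncate -s 400M "$AIO_FILE"                                   # grow the backing file...
  rpc.py bdev_aio_rescan aio_bdev                                # ...and let bdev_aio pick up the new size

  # Export the lvol over NVMe/TCP so bdevperf can drive I/O against it.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # In the log, a separate bdevperf instance then connects to this subsystem:
  #   bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
  #   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
  #          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0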
00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:44.149 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:44.408 [2024-10-13 20:09:34.029308] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:39:44.408 [2024-10-13 20:09:34.029488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186761 ] 00:39:44.408 [2024-10-13 20:09:34.160860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.668 [2024-10-13 20:09:34.290093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:45.237 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:45.237 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:39:45.237 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:45.803 Nvme0n1 00:39:45.803 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:46.062 [ 00:39:46.062 { 00:39:46.062 "name": "Nvme0n1", 00:39:46.062 "aliases": [ 00:39:46.062 "52580c8c-0603-4365-81b5-690ee972846d" 00:39:46.062 ], 00:39:46.062 "product_name": "NVMe disk", 00:39:46.062 "block_size": 4096, 00:39:46.062 "num_blocks": 38912, 00:39:46.062 "uuid": "52580c8c-0603-4365-81b5-690ee972846d", 00:39:46.062 "numa_id": 0, 00:39:46.062 "assigned_rate_limits": { 00:39:46.062 "rw_ios_per_sec": 0, 00:39:46.062 "rw_mbytes_per_sec": 0, 00:39:46.062 "r_mbytes_per_sec": 0, 00:39:46.062 "w_mbytes_per_sec": 0 00:39:46.062 }, 00:39:46.062 "claimed": false, 00:39:46.062 "zoned": false, 00:39:46.062 "supported_io_types": { 00:39:46.062 "read": true, 00:39:46.062 "write": true, 00:39:46.062 "unmap": true, 00:39:46.062 "flush": true, 00:39:46.062 "reset": true, 00:39:46.062 "nvme_admin": true, 00:39:46.062 "nvme_io": true, 00:39:46.062 "nvme_io_md": false, 00:39:46.062 "write_zeroes": true, 00:39:46.062 "zcopy": false, 00:39:46.062 "get_zone_info": false, 00:39:46.062 "zone_management": false, 00:39:46.062 "zone_append": false, 00:39:46.062 "compare": true, 00:39:46.062 "compare_and_write": true, 00:39:46.062 "abort": true, 00:39:46.062 "seek_hole": false, 00:39:46.062 "seek_data": false, 00:39:46.062 "copy": true, 00:39:46.062 "nvme_iov_md": false 00:39:46.062 }, 00:39:46.062 "memory_domains": [ 00:39:46.062 { 00:39:46.062 "dma_device_id": "system", 00:39:46.062 "dma_device_type": 1 00:39:46.062 } 00:39:46.062 ], 00:39:46.062 "driver_specific": { 00:39:46.062 "nvme": [ 00:39:46.062 { 00:39:46.062 "trid": { 00:39:46.062 "trtype": "TCP", 00:39:46.062 "adrfam": "IPv4", 00:39:46.062 "traddr": "10.0.0.2", 00:39:46.062 "trsvcid": "4420", 00:39:46.062 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:46.062 }, 00:39:46.062 "ctrlr_data": 
{ 00:39:46.062 "cntlid": 1, 00:39:46.062 "vendor_id": "0x8086", 00:39:46.062 "model_number": "SPDK bdev Controller", 00:39:46.062 "serial_number": "SPDK0", 00:39:46.062 "firmware_revision": "25.01", 00:39:46.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:46.062 "oacs": { 00:39:46.062 "security": 0, 00:39:46.062 "format": 0, 00:39:46.062 "firmware": 0, 00:39:46.062 "ns_manage": 0 00:39:46.062 }, 00:39:46.062 "multi_ctrlr": true, 00:39:46.062 "ana_reporting": false 00:39:46.062 }, 00:39:46.062 "vs": { 00:39:46.062 "nvme_version": "1.3" 00:39:46.062 }, 00:39:46.062 "ns_data": { 00:39:46.062 "id": 1, 00:39:46.062 "can_share": true 00:39:46.062 } 00:39:46.062 } 00:39:46.062 ], 00:39:46.062 "mp_policy": "active_passive" 00:39:46.062 } 00:39:46.062 } 00:39:46.062 ] 00:39:46.062 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3186919 00:39:46.062 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:46.062 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:46.062 Running I/O for 10 seconds... 00:39:47.442 Latency(us) 00:39:47.442 [2024-10-13T18:09:37.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:47.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:47.442 Nvme0n1 : 1.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:39:47.442 [2024-10-13T18:09:37.257Z] =================================================================================================================== 00:39:47.442 [2024-10-13T18:09:37.257Z] Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:39:47.442 00:39:48.008 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:39:48.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.268 Nvme0n1 : 2.00 10891.00 42.54 0.00 0.00 0.00 0.00 0.00 00:39:48.268 [2024-10-13T18:09:38.083Z] =================================================================================================================== 00:39:48.268 [2024-10-13T18:09:38.083Z] Total : 10891.00 42.54 0.00 0.00 0.00 0.00 0.00 00:39:48.268 00:39:48.268 true 00:39:48.268 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:39:48.268 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:48.834 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:48.834 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:48.834 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3186919 00:39:49.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.092 Nvme0n1 : 
3.00 10887.67 42.53 0.00 0.00 0.00 0.00 0.00 00:39:49.092 [2024-10-13T18:09:38.907Z] =================================================================================================================== 00:39:49.092 [2024-10-13T18:09:38.907Z] Total : 10887.67 42.53 0.00 0.00 0.00 0.00 0.00 00:39:49.092 00:39:50.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.466 Nvme0n1 : 4.00 10912.50 42.63 0.00 0.00 0.00 0.00 0.00 00:39:50.466 [2024-10-13T18:09:40.281Z] =================================================================================================================== 00:39:50.466 [2024-10-13T18:09:40.281Z] Total : 10912.50 42.63 0.00 0.00 0.00 0.00 0.00 00:39:50.466 00:39:51.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.400 Nvme0n1 : 5.00 10882.00 42.51 0.00 0.00 0.00 0.00 0.00 00:39:51.400 [2024-10-13T18:09:41.215Z] =================================================================================================================== 00:39:51.400 [2024-10-13T18:09:41.215Z] Total : 10882.00 42.51 0.00 0.00 0.00 0.00 0.00 00:39:51.400 00:39:52.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.334 Nvme0n1 : 6.00 10893.67 42.55 0.00 0.00 0.00 0.00 0.00 00:39:52.334 [2024-10-13T18:09:42.149Z] =================================================================================================================== 00:39:52.334 [2024-10-13T18:09:42.149Z] Total : 10893.67 42.55 0.00 0.00 0.00 0.00 0.00 00:39:52.334 00:39:53.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.269 Nvme0n1 : 7.00 10924.57 42.67 0.00 0.00 0.00 0.00 0.00 00:39:53.269 [2024-10-13T18:09:43.084Z] =================================================================================================================== 00:39:53.269 [2024-10-13T18:09:43.084Z] Total : 10924.57 42.67 0.00 0.00 0.00 0.00 0.00 00:39:53.269 00:39:54.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:54.202 Nvme0n1 : 8.00 10947.50 42.76 0.00 0.00 0.00 0.00 0.00 00:39:54.202 [2024-10-13T18:09:44.017Z] =================================================================================================================== 00:39:54.202 [2024-10-13T18:09:44.017Z] Total : 10947.50 42.76 0.00 0.00 0.00 0.00 0.00 00:39:54.202 00:39:55.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.135 Nvme0n1 : 9.00 10859.44 42.42 0.00 0.00 0.00 0.00 0.00 00:39:55.135 [2024-10-13T18:09:44.950Z] =================================================================================================================== 00:39:55.135 [2024-10-13T18:09:44.950Z] Total : 10859.44 42.42 0.00 0.00 0.00 0.00 0.00 00:39:55.135 00:39:56.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.083 Nvme0n1 : 10.00 10781.50 42.12 0.00 0.00 0.00 0.00 0.00 00:39:56.083 [2024-10-13T18:09:45.898Z] =================================================================================================================== 00:39:56.083 [2024-10-13T18:09:45.898Z] Total : 10781.50 42.12 0.00 0.00 0.00 0.00 0.00 00:39:56.083 00:39:56.341 00:39:56.341 Latency(us) 00:39:56.341 [2024-10-13T18:09:46.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.341 Nvme0n1 : 10.01 10777.47 42.10 0.00 0.00 11863.82 4150.61 25826.04 00:39:56.341 
[2024-10-13T18:09:46.156Z] =================================================================================================================== 00:39:56.341 [2024-10-13T18:09:46.156Z] Total : 10777.47 42.10 0.00 0.00 11863.82 4150.61 25826.04 00:39:56.341 { 00:39:56.341 "results": [ 00:39:56.341 { 00:39:56.341 "job": "Nvme0n1", 00:39:56.341 "core_mask": "0x2", 00:39:56.341 "workload": "randwrite", 00:39:56.341 "status": "finished", 00:39:56.341 "queue_depth": 128, 00:39:56.341 "io_size": 4096, 00:39:56.341 "runtime": 10.01265, 00:39:56.341 "iops": 10777.466504871338, 00:39:56.341 "mibps": 42.09947853465366, 00:39:56.341 "io_failed": 0, 00:39:56.341 "io_timeout": 0, 00:39:56.341 "avg_latency_us": 11863.816082512441, 00:39:56.341 "min_latency_us": 4150.613333333334, 00:39:56.341 "max_latency_us": 25826.03851851852 00:39:56.341 } 00:39:56.341 ], 00:39:56.341 "core_count": 1 00:39:56.341 } 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3186761 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3186761 ']' 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3186761 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186761 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186761' 00:39:56.341 killing process with pid 3186761 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3186761 00:39:56.341 Received shutdown signal, test time was about 10.000000 seconds 00:39:56.341 00:39:56.341 Latency(us) 00:39:56.341 [2024-10-13T18:09:46.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.341 [2024-10-13T18:09:46.156Z] =================================================================================================================== 00:39:56.341 [2024-10-13T18:09:46.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:56.341 20:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3186761 00:39:57.348 20:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:57.348 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:39:57.914 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:39:57.914 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:57.914 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:57.914 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:57.914 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3183911 00:39:57.914 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3183911 00:39:58.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3183911 Killed "${NVMF_APP[@]}" "$@" 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3188353 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3188353 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3188353 ']' 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:58.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
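Before the old target is killed with the lvstore still dirty, the script asserts the cluster accounting visible in the RPC output above: 99 data clusters after the grow, of which the 150 MiB thick-provisioned lvol holds 38 (num_allocated_clusters in the bdev dump), leaving 61 free. A minimal sketch of that check, assuming the same rpc.py socket and the lvstore UUID captured in $lvs:

  total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')

  # 400 MiB backing file with 4 MiB clusters -> 99 data clusters after metadata;
  # the 150 MiB lvol occupies ceil(150/4) = 38 of them, leaving 61 free.
  (( total == 99 )) && (( free == 61 )) && echo "cluster counts match the log"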
00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:58.172 20:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:58.172 [2024-10-13 20:09:47.848268] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:58.172 [2024-10-13 20:09:47.850869] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:39:58.173 [2024-10-13 20:09:47.850984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:58.430 [2024-10-13 20:09:47.991623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.430 [2024-10-13 20:09:48.109494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:58.430 [2024-10-13 20:09:48.109573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:58.430 [2024-10-13 20:09:48.109614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:58.430 [2024-10-13 20:09:48.109631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:58.430 [2024-10-13 20:09:48.109650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:58.430 [2024-10-13 20:09:48.111084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.688 [2024-10-13 20:09:48.462064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:58.688 [2024-10-13 20:09:48.462504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:59.255 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:59.514 [2024-10-13 20:09:49.163493] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:59.514 [2024-10-13 20:09:49.163728] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:59.514 [2024-10-13 20:09:49.163817] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 52580c8c-0603-4365-81b5-690ee972846d 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=52580c8c-0603-4365-81b5-690ee972846d 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:59.514 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:59.772 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52580c8c-0603-4365-81b5-690ee972846d -t 2000 00:40:00.030 [ 00:40:00.030 { 00:40:00.030 "name": "52580c8c-0603-4365-81b5-690ee972846d", 00:40:00.030 "aliases": [ 00:40:00.030 "lvs/lvol" 00:40:00.030 ], 00:40:00.030 "product_name": "Logical Volume", 00:40:00.030 "block_size": 4096, 00:40:00.030 "num_blocks": 38912, 00:40:00.030 "uuid": "52580c8c-0603-4365-81b5-690ee972846d", 00:40:00.030 "assigned_rate_limits": { 00:40:00.030 "rw_ios_per_sec": 0, 00:40:00.030 "rw_mbytes_per_sec": 0, 00:40:00.030 
"r_mbytes_per_sec": 0, 00:40:00.030 "w_mbytes_per_sec": 0 00:40:00.030 }, 00:40:00.030 "claimed": false, 00:40:00.030 "zoned": false, 00:40:00.030 "supported_io_types": { 00:40:00.030 "read": true, 00:40:00.030 "write": true, 00:40:00.030 "unmap": true, 00:40:00.030 "flush": false, 00:40:00.030 "reset": true, 00:40:00.030 "nvme_admin": false, 00:40:00.030 "nvme_io": false, 00:40:00.030 "nvme_io_md": false, 00:40:00.030 "write_zeroes": true, 00:40:00.030 "zcopy": false, 00:40:00.030 "get_zone_info": false, 00:40:00.030 "zone_management": false, 00:40:00.030 "zone_append": false, 00:40:00.030 "compare": false, 00:40:00.030 "compare_and_write": false, 00:40:00.030 "abort": false, 00:40:00.030 "seek_hole": true, 00:40:00.030 "seek_data": true, 00:40:00.030 "copy": false, 00:40:00.030 "nvme_iov_md": false 00:40:00.030 }, 00:40:00.030 "driver_specific": { 00:40:00.030 "lvol": { 00:40:00.030 "lvol_store_uuid": "cd20841c-99ac-4a84-b7f8-4779819d9d11", 00:40:00.030 "base_bdev": "aio_bdev", 00:40:00.030 "thin_provision": false, 00:40:00.030 "num_allocated_clusters": 38, 00:40:00.030 "snapshot": false, 00:40:00.030 "clone": false, 00:40:00.030 "esnap_clone": false 00:40:00.030 } 00:40:00.030 } 00:40:00.030 } 00:40:00.030 ] 00:40:00.030 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:00.030 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:00.030 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:00.289 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:00.289 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:00.289 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:00.547 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:00.547 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:00.805 [2024-10-13 20:09:50.556095] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:00.805 20:09:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:00.805 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:01.064 request: 00:40:01.064 { 00:40:01.064 "uuid": "cd20841c-99ac-4a84-b7f8-4779819d9d11", 00:40:01.064 "method": "bdev_lvol_get_lvstores", 00:40:01.064 "req_id": 1 00:40:01.064 } 00:40:01.064 Got JSON-RPC error response 00:40:01.064 response: 00:40:01.064 { 00:40:01.064 "code": -19, 00:40:01.064 "message": "No such device" 00:40:01.064 } 00:40:01.064 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:40:01.064 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:01.064 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:01.064 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:01.064 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:01.630 aio_bdev 00:40:01.630 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 52580c8c-0603-4365-81b5-690ee972846d 00:40:01.630 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=52580c8c-0603-4365-81b5-690ee972846d 00:40:01.630 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:01.630 20:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:01.630 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:01.630 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:01.630 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:01.889 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52580c8c-0603-4365-81b5-690ee972846d -t 2000 00:40:02.147 [ 00:40:02.147 { 00:40:02.147 "name": "52580c8c-0603-4365-81b5-690ee972846d", 00:40:02.147 "aliases": [ 00:40:02.147 "lvs/lvol" 00:40:02.147 ], 00:40:02.147 "product_name": "Logical Volume", 00:40:02.147 "block_size": 4096, 00:40:02.147 "num_blocks": 38912, 00:40:02.147 "uuid": "52580c8c-0603-4365-81b5-690ee972846d", 00:40:02.147 "assigned_rate_limits": { 00:40:02.147 "rw_ios_per_sec": 0, 00:40:02.147 "rw_mbytes_per_sec": 0, 00:40:02.147 "r_mbytes_per_sec": 0, 00:40:02.147 "w_mbytes_per_sec": 0 00:40:02.147 }, 00:40:02.147 "claimed": false, 00:40:02.147 "zoned": false, 00:40:02.147 "supported_io_types": { 00:40:02.147 "read": true, 00:40:02.147 "write": true, 00:40:02.147 "unmap": true, 00:40:02.147 "flush": false, 00:40:02.147 "reset": true, 00:40:02.147 "nvme_admin": false, 00:40:02.147 "nvme_io": false, 00:40:02.147 "nvme_io_md": false, 00:40:02.147 "write_zeroes": true, 00:40:02.147 "zcopy": false, 00:40:02.147 "get_zone_info": false, 00:40:02.147 "zone_management": false, 00:40:02.147 "zone_append": false, 00:40:02.147 "compare": false, 00:40:02.147 "compare_and_write": false, 00:40:02.147 "abort": false, 00:40:02.147 "seek_hole": true, 00:40:02.147 "seek_data": true, 00:40:02.147 "copy": false, 00:40:02.147 "nvme_iov_md": false 00:40:02.147 }, 00:40:02.147 "driver_specific": { 00:40:02.147 "lvol": { 00:40:02.147 "lvol_store_uuid": "cd20841c-99ac-4a84-b7f8-4779819d9d11", 00:40:02.147 "base_bdev": "aio_bdev", 00:40:02.147 "thin_provision": false, 00:40:02.147 "num_allocated_clusters": 38, 00:40:02.147 "snapshot": false, 00:40:02.147 "clone": false, 00:40:02.147 "esnap_clone": false 00:40:02.147 } 00:40:02.147 } 00:40:02.147 } 00:40:02.147 ] 00:40:02.147 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:02.147 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:02.147 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:02.405 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:02.405 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:02.405 20:09:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:02.663 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:02.663 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52580c8c-0603-4365-81b5-690ee972846d 00:40:02.921 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd20841c-99ac-4a84-b7f8-4779819d9d11 00:40:03.179 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:03.438 00:40:03.438 real 0m21.991s 00:40:03.438 user 0m38.554s 00:40:03.438 sys 0m5.049s 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:03.438 ************************************ 00:40:03.438 END TEST lvs_grow_dirty 00:40:03.438 ************************************ 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:03.438 nvmf_trace.0 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
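The teardown just recorded reduces to the following, under the same assumptions as the setup sketch:

  rpc.py bdev_lvol_delete "$lvol"            # drop the logical volume
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"  # then its lvstore
  rpc.py bdev_aio_delete aio_bdev            # detach the AIO backing bdev
  rm -f "$AIO_FILE"                          # and remove the scratch file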
00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:03.438 rmmod nvme_tcp 00:40:03.438 rmmod nvme_fabrics 00:40:03.438 rmmod nvme_keyring 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3188353 ']' 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3188353 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3188353 ']' 00:40:03.438 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3188353 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3188353 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3188353' 00:40:03.697 killing process with pid 3188353 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3188353 00:40:03.697 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3188353 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.631 20:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:07.164 00:40:07.164 real 0m48.355s 00:40:07.164 user 1m1.093s 00:40:07.164 sys 0m8.938s 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:07.164 ************************************ 00:40:07.164 END TEST nvmf_lvs_grow 00:40:07.164 ************************************ 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:07.164 ************************************ 00:40:07.164 START TEST nvmf_bdev_io_wait 00:40:07.164 ************************************ 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:07.164 * Looking for test storage... 
00:40:07.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.164 --rc genhtml_branch_coverage=1 00:40:07.164 --rc genhtml_function_coverage=1 00:40:07.164 --rc genhtml_legend=1 00:40:07.164 --rc geninfo_all_blocks=1 00:40:07.164 --rc geninfo_unexecuted_blocks=1 00:40:07.164 00:40:07.164 ' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.164 --rc genhtml_branch_coverage=1 00:40:07.164 --rc genhtml_function_coverage=1 00:40:07.164 --rc genhtml_legend=1 00:40:07.164 --rc geninfo_all_blocks=1 00:40:07.164 --rc geninfo_unexecuted_blocks=1 00:40:07.164 00:40:07.164 ' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.164 --rc genhtml_branch_coverage=1 00:40:07.164 --rc genhtml_function_coverage=1 00:40:07.164 --rc genhtml_legend=1 00:40:07.164 --rc geninfo_all_blocks=1 00:40:07.164 --rc geninfo_unexecuted_blocks=1 00:40:07.164 00:40:07.164 ' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:07.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.164 --rc genhtml_branch_coverage=1 00:40:07.164 --rc genhtml_function_coverage=1 00:40:07.164 --rc genhtml_legend=1 00:40:07.164 --rc geninfo_all_blocks=1 00:40:07.164 --rc 
geninfo_unexecuted_blocks=1 00:40:07.164 00:40:07.164 ' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.164 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:07.165 20:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:09.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:09.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:09.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:09.065 
20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:09.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:09.065 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:09.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:09.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:40:09.066 00:40:09.066 --- 10.0.0.2 ping statistics --- 00:40:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.066 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:09.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:09.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:40:09.066 00:40:09.066 --- 10.0.0.1 ping statistics --- 00:40:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.066 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3191006 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3191006 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3191006 ']' 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:09.066 20:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:09.066 [2024-10-13 20:09:58.700308] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:09.066 [2024-10-13 20:09:58.703089] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:40:09.066 [2024-10-13 20:09:58.703202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:09.066 [2024-10-13 20:09:58.843679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:09.324 [2024-10-13 20:09:58.969257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:09.324 [2024-10-13 20:09:58.969338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:09.324 [2024-10-13 20:09:58.969363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:09.324 [2024-10-13 20:09:58.969381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:09.324 [2024-10-13 20:09:58.969420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:09.324 [2024-10-13 20:09:58.971949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.324 [2024-10-13 20:09:58.972011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:09.324 [2024-10-13 20:09:58.972052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.324 [2024-10-13 20:09:58.972076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:09.324 [2024-10-13 20:09:58.972819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.258 20:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.258 [2024-10-13 20:09:59.996501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.258 [2024-10-13 20:09:59.997662] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.258 [2024-10-13 20:09:59.998846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:10.258 [2024-10-13 20:09:59.999996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.258 [2024-10-13 20:10:00.005122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.258 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.516 Malloc0 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:10.516 [2024-10-13 20:10:00.133371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3191284 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3191286 00:40:10.516 20:10:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:10.516 { 00:40:10.516 "params": { 00:40:10.516 "name": "Nvme$subsystem", 00:40:10.516 "trtype": "$TEST_TRANSPORT", 00:40:10.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.516 "adrfam": "ipv4", 00:40:10.516 "trsvcid": "$NVMF_PORT", 00:40:10.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.516 "hdgst": ${hdgst:-false}, 00:40:10.516 "ddgst": ${ddgst:-false} 00:40:10.516 }, 00:40:10.516 "method": "bdev_nvme_attach_controller" 00:40:10.516 } 00:40:10.516 EOF 00:40:10.516 )") 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3191288 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:10.516 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:10.517 { 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme$subsystem", 00:40:10.517 "trtype": "$TEST_TRANSPORT", 00:40:10.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "$NVMF_PORT", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.517 "hdgst": ${hdgst:-false}, 00:40:10.517 "ddgst": ${ddgst:-false} 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 } 00:40:10.517 EOF 00:40:10.517 )") 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3191291 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:10.517 { 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme$subsystem", 00:40:10.517 "trtype": "$TEST_TRANSPORT", 00:40:10.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "$NVMF_PORT", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.517 "hdgst": ${hdgst:-false}, 00:40:10.517 "ddgst": ${ddgst:-false} 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 } 00:40:10.517 EOF 00:40:10.517 )") 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:10.517 { 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme$subsystem", 00:40:10.517 "trtype": "$TEST_TRANSPORT", 00:40:10.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "$NVMF_PORT", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.517 "hdgst": ${hdgst:-false}, 00:40:10.517 "ddgst": ${ddgst:-false} 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 } 00:40:10.517 EOF 00:40:10.517 )") 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3191284 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme1", 00:40:10.517 "trtype": "tcp", 00:40:10.517 "traddr": "10.0.0.2", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "4420", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:10.517 "hdgst": false, 00:40:10.517 "ddgst": false 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 }' 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme1", 00:40:10.517 "trtype": "tcp", 00:40:10.517 "traddr": "10.0.0.2", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "4420", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:10.517 "hdgst": false, 00:40:10.517 "ddgst": false 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 }' 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme1", 00:40:10.517 "trtype": "tcp", 00:40:10.517 "traddr": "10.0.0.2", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "4420", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:10.517 "hdgst": false, 00:40:10.517 "ddgst": false 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 }' 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:40:10.517 20:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:10.517 "params": { 00:40:10.517 "name": "Nvme1", 00:40:10.517 "trtype": "tcp", 00:40:10.517 "traddr": "10.0.0.2", 00:40:10.517 "adrfam": "ipv4", 00:40:10.517 "trsvcid": "4420", 00:40:10.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:10.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:10.517 "hdgst": false, 00:40:10.517 "ddgst": false 00:40:10.517 }, 00:40:10.517 "method": "bdev_nvme_attach_controller" 00:40:10.517 }' 00:40:10.517 [2024-10-13 20:10:00.221949] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:40:10.517 [2024-10-13 20:10:00.221949] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:40:10.517 [2024-10-13 20:10:00.222093] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 [2024-10-13 20:10:00.222096] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib--proc-type=auto ] 00:40:10.517 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:10.517 [2024-10-13 20:10:00.222982] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:40:10.517 [2024-10-13 20:10:00.222981] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:40:10.517 [2024-10-13 20:10:00.223108] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-13 20:10:00.223108] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:10.517 --proc-type=auto ] 00:40:10.775 [2024-10-13 20:10:00.463212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.775 [2024-10-13 20:10:00.568991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.775 [2024-10-13 20:10:00.585365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:11.033 [2024-10-13 20:10:00.640440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.033 [2024-10-13 20:10:00.691605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:11.033 [2024-10-13 20:10:00.714616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.033 [2024-10-13 20:10:00.757454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:11.033 [2024-10-13 20:10:00.830618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:11.290 Running I/O for 1 seconds... 00:40:11.290 Running I/O for 1 seconds... 00:40:11.546 Running I/O for 1 seconds... 00:40:11.546 Running I/O for 1 seconds... 
00:40:12.477 8056.00 IOPS, 31.47 MiB/s 00:40:12.477 Latency(us) 00:40:12.477 [2024-10-13T18:10:02.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.477 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:12.477 Nvme1n1 : 1.01 8092.36 31.61 0.00 0.00 15721.37 5606.97 20194.80 00:40:12.477 [2024-10-13T18:10:02.292Z] =================================================================================================================== 00:40:12.477 [2024-10-13T18:10:02.292Z] Total : 8092.36 31.61 0.00 0.00 15721.37 5606.97 20194.80 00:40:12.477 6194.00 IOPS, 24.20 MiB/s 00:40:12.477 Latency(us) 00:40:12.477 [2024-10-13T18:10:02.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.477 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:12.477 Nvme1n1 : 1.01 6255.75 24.44 0.00 0.00 20337.50 2706.39 29903.83 00:40:12.477 [2024-10-13T18:10:02.292Z] =================================================================================================================== 00:40:12.477 [2024-10-13T18:10:02.292Z] Total : 6255.75 24.44 0.00 0.00 20337.50 2706.39 29903.83 00:40:12.734 6450.00 IOPS, 25.20 MiB/s 00:40:12.734 Latency(us) 00:40:12.734 [2024-10-13T18:10:02.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.734 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:12.734 Nvme1n1 : 1.01 6522.20 25.48 0.00 0.00 19521.67 7718.68 31457.28 00:40:12.734 [2024-10-13T18:10:02.549Z] =================================================================================================================== 00:40:12.734 [2024-10-13T18:10:02.549Z] Total : 6522.20 25.48 0.00 0.00 19521.67 7718.68 31457.28 00:40:12.734 144432.00 IOPS, 564.19 MiB/s 00:40:12.734 Latency(us) 00:40:12.734 [2024-10-13T18:10:02.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.734 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:12.734 Nvme1n1 : 1.00 144131.13 563.01 0.00 0.00 883.54 391.40 2051.03 00:40:12.734 [2024-10-13T18:10:02.549Z] =================================================================================================================== 00:40:12.734 [2024-10-13T18:10:02.549Z] Total : 144131.13 563.01 0.00 0.00 883.54 391.40 2051.03 00:40:12.992 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3191286 00:40:12.992 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3191288 00:40:13.250 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3191291 00:40:13.250 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:13.250 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:13.250 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:13.250 rmmod nvme_tcp 00:40:13.250 rmmod nvme_fabrics 00:40:13.250 rmmod nvme_keyring 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3191006 ']' 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3191006 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3191006 ']' 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3191006 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:13.250 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191006 00:40:13.507 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:13.507 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:13.507 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191006' 00:40:13.507 killing process with pid 3191006 00:40:13.507 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3191006 00:40:13.507 20:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3191006 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:14.441 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.442 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:14.442 20:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:16.973 00:40:16.973 real 0m9.729s 00:40:16.973 user 0m21.950s 00:40:16.973 sys 0m4.698s 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:16.973 ************************************ 00:40:16.973 END TEST nvmf_bdev_io_wait 00:40:16.973 ************************************ 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:16.973 ************************************ 00:40:16.973 START TEST nvmf_queue_depth 00:40:16.973 ************************************ 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:16.973 * Looking for test storage... 
00:40:16.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:16.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.973 --rc genhtml_branch_coverage=1 00:40:16.973 --rc genhtml_function_coverage=1 00:40:16.973 --rc genhtml_legend=1 00:40:16.973 --rc geninfo_all_blocks=1 00:40:16.973 --rc geninfo_unexecuted_blocks=1 00:40:16.973 00:40:16.973 ' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:16.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.973 --rc genhtml_branch_coverage=1 00:40:16.973 --rc genhtml_function_coverage=1 00:40:16.973 --rc genhtml_legend=1 00:40:16.973 --rc geninfo_all_blocks=1 00:40:16.973 --rc geninfo_unexecuted_blocks=1 00:40:16.973 00:40:16.973 ' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:16.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.973 --rc genhtml_branch_coverage=1 00:40:16.973 --rc genhtml_function_coverage=1 00:40:16.973 --rc genhtml_legend=1 00:40:16.973 --rc geninfo_all_blocks=1 00:40:16.973 --rc geninfo_unexecuted_blocks=1 00:40:16.973 00:40:16.973 ' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:16.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.973 --rc genhtml_branch_coverage=1 00:40:16.973 --rc genhtml_function_coverage=1 00:40:16.973 --rc genhtml_legend=1 00:40:16.973 --rc geninfo_all_blocks=1 00:40:16.973 --rc 
geninfo_unexecuted_blocks=1 00:40:16.973 00:40:16.973 ' 00:40:16.973 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:16.974 20:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:18.883 20:10:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:18.883 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:18.883 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:40:18.883 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:18.883 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:18.883 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:18.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:18.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:40:18.884 00:40:18.884 --- 10.0.0.2 ping statistics --- 00:40:18.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.884 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:18.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:18.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:40:18.884 00:40:18.884 --- 10.0.0.1 ping statistics --- 00:40:18.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.884 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3193647 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3193647 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3193647 ']' 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
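Note: the namespace plumbing that nvmf_tcp_init performed above boils down to roughly the following. This is a hedged reconstruction assembled from the commands visible in the trace (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses come straight from the log), not the script itself:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # connectivity sanity check, as above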
00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:18.884 20:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.884 [2024-10-13 20:10:08.674099] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:18.884 [2024-10-13 20:10:08.676511] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:40:18.884 [2024-10-13 20:10:08.676612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.142 [2024-10-13 20:10:08.812534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.142 [2024-10-13 20:10:08.934458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.142 [2024-10-13 20:10:08.934557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.143 [2024-10-13 20:10:08.934584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.143 [2024-10-13 20:10:08.934603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.143 [2024-10-13 20:10:08.934623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.143 [2024-10-13 20:10:08.936092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.709 [2024-10-13 20:10:09.258732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:19.709 [2024-10-13 20:10:09.259144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
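Note: nvmfappstart launches nvmf_tgt inside the namespace with -m 0x2 --interrupt-mode and then blocks until the RPC socket answers ("Waiting for process to start up and listen..." above). A minimal sketch of such a wait loop, using the generic rpc_get_methods call rather than the harness's waitforlisten helper (illustrative only; path abbreviated):

    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
      if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break                           # target is up and answering RPCs
      fi
      sleep 0.1
    done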
00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:19.967 [2024-10-13 20:10:09.717249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.967 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:20.225 Malloc0 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
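Note: the rpc_cmd calls above configure the target for the queue-depth run. Expressed as plain rpc.py invocations they would look roughly like this — a sketch of the sequence shown in the trace, assuming the default /var/tmp/spdk.sock and the repo-relative rpc.py path:

    RPC="scripts/rpc.py"                                     # defaults to /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, flags as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420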
00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:20.225 [2024-10-13 20:10:09.833421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.225 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3193801 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3193801 /var/tmp/bdevperf.sock 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3193801 ']' 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:20.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:20.226 20:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:20.226 [2024-10-13 20:10:09.927932] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
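Note: the workload itself comes from bdevperf, a second SPDK app started with -z on its own RPC socket, pointed at the subsystem, and then driven via bdevperf.py. A condensed sketch of what the trace shows (paths abbreviated to repo-relative form; the 10-second verify run at queue depth 1024 matches the log):

    BPERF_SOCK=/var/tmp/bdevperf.sock
    build/examples/bdevperf -z -r "$BPERF_SOCK" -q 1024 -o 4096 -w verify -t 10 &
    # (the harness waits for the socket before issuing RPCs)
    scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests   # prints the IOPS/latency table below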
00:40:20.226 [2024-10-13 20:10:09.928081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193801 ] 00:40:20.484 [2024-10-13 20:10:10.074034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.484 [2024-10-13 20:10:10.210782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:21.418 NVMe0n1 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.418 20:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:21.418 Running I/O for 10 seconds... 00:40:23.724 6135.00 IOPS, 23.96 MiB/s [2024-10-13T18:10:14.473Z] 6041.50 IOPS, 23.60 MiB/s [2024-10-13T18:10:15.407Z] 6050.33 IOPS, 23.63 MiB/s [2024-10-13T18:10:16.341Z] 6009.75 IOPS, 23.48 MiB/s [2024-10-13T18:10:17.273Z] 6029.00 IOPS, 23.55 MiB/s [2024-10-13T18:10:18.207Z] 6088.33 IOPS, 23.78 MiB/s [2024-10-13T18:10:19.143Z] 6078.86 IOPS, 23.75 MiB/s [2024-10-13T18:10:20.519Z] 6076.75 IOPS, 23.74 MiB/s [2024-10-13T18:10:21.453Z] 6067.44 IOPS, 23.70 MiB/s [2024-10-13T18:10:21.453Z] 6058.90 IOPS, 23.67 MiB/s 00:40:31.638 Latency(us) 00:40:31.638 [2024-10-13T18:10:21.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.638 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:31.638 Verification LBA range: start 0x0 length 0x4000 00:40:31.638 NVMe0n1 : 10.10 6101.49 23.83 0.00 0.00 166815.29 11990.66 97090.37 00:40:31.638 [2024-10-13T18:10:21.453Z] =================================================================================================================== 00:40:31.638 [2024-10-13T18:10:21.453Z] Total : 6101.49 23.83 0.00 0.00 166815.29 11990.66 97090.37 00:40:31.638 { 00:40:31.638 "results": [ 00:40:31.638 { 00:40:31.638 "job": "NVMe0n1", 00:40:31.638 "core_mask": "0x1", 00:40:31.638 "workload": "verify", 00:40:31.638 "status": "finished", 00:40:31.638 "verify_range": { 00:40:31.638 "start": 0, 00:40:31.638 "length": 16384 00:40:31.638 }, 00:40:31.638 "queue_depth": 1024, 00:40:31.638 "io_size": 4096, 00:40:31.638 "runtime": 10.09803, 00:40:31.638 "iops": 6101.487121745528, 00:40:31.638 "mibps": 23.83393406931847, 00:40:31.638 "io_failed": 0, 00:40:31.638 "io_timeout": 0, 00:40:31.638 "avg_latency_us": 166815.28612506617, 00:40:31.638 "min_latency_us": 11990.660740740741, 00:40:31.638 "max_latency_us": 97090.37037037036 00:40:31.638 } 00:40:31.638 ], 
00:40:31.638 "core_count": 1 00:40:31.638 } 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3193801 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3193801 ']' 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3193801 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193801 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193801' 00:40:31.638 killing process with pid 3193801 00:40:31.638 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3193801 00:40:31.638 Received shutdown signal, test time was about 10.000000 seconds 00:40:31.638 00:40:31.638 Latency(us) 00:40:31.638 [2024-10-13T18:10:21.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.638 [2024-10-13T18:10:21.453Z] =================================================================================================================== 00:40:31.638 [2024-10-13T18:10:21.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:31.639 20:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3193801 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:32.613 rmmod nvme_tcp 00:40:32.613 rmmod nvme_fabrics 00:40:32.613 rmmod nvme_keyring 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:32.613 20:10:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3193647 ']' 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3193647 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3193647 ']' 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3193647 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193647 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193647' 00:40:32.613 killing process with pid 3193647 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3193647 00:40:32.613 20:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3193647 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.013 20:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:35.915 00:40:35.915 real 0m19.288s 00:40:35.915 user 0m26.586s 00:40:35.915 sys 0m3.781s 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:35.915 ************************************ 00:40:35.915 END TEST nvmf_queue_depth 00:40:35.915 ************************************ 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:35.915 ************************************ 00:40:35.915 START TEST nvmf_target_multipath 00:40:35.915 ************************************ 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:35.915 * Looking for test storage... 00:40:35.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:40:35.915 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:36.174 20:10:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.174 --rc genhtml_branch_coverage=1 00:40:36.174 --rc genhtml_function_coverage=1 00:40:36.174 --rc genhtml_legend=1 00:40:36.174 --rc geninfo_all_blocks=1 00:40:36.174 --rc geninfo_unexecuted_blocks=1 00:40:36.174 00:40:36.174 ' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.174 --rc genhtml_branch_coverage=1 00:40:36.174 --rc genhtml_function_coverage=1 00:40:36.174 --rc genhtml_legend=1 00:40:36.174 --rc geninfo_all_blocks=1 00:40:36.174 --rc geninfo_unexecuted_blocks=1 00:40:36.174 00:40:36.174 ' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.174 --rc genhtml_branch_coverage=1 00:40:36.174 --rc genhtml_function_coverage=1 00:40:36.174 --rc genhtml_legend=1 00:40:36.174 --rc geninfo_all_blocks=1 00:40:36.174 --rc 
geninfo_unexecuted_blocks=1 00:40:36.174 00:40:36.174 ' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.174 --rc genhtml_branch_coverage=1 00:40:36.174 --rc genhtml_function_coverage=1 00:40:36.174 --rc genhtml_legend=1 00:40:36.174 --rc geninfo_all_blocks=1 00:40:36.174 --rc geninfo_unexecuted_blocks=1 00:40:36.174 00:40:36.174 ' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
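Note: before sourcing nvmf/common.sh, the multipath test repeats the lcov version probe seen earlier: cmp_versions splits the dotted versions and compares them field by field (lt 1.15 2 returns 0 above, so the legacy --rc options are used). A small, self-contained illustration of that comparison style — not the scripts/common.sh implementation, and version_lt is a hypothetical helper name:

    # returns 0 if $1 < $2, comparing dotted numeric versions field by field
    version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.x, use legacy --rc options"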
00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.174 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:36.175 20:10:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:36.175 20:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:38.075 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:38.076 20:10:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:38.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:38.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:38.076 20:10:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:38.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:38.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:38.076 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:38.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
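Stripped of the xtrace noise, the namespace plumbing above boils down to a short sequence. This sketch reuses the interface names and addresses from the trace and assumes both E810 ports exist and carry no addresses yet:

#!/usr/bin/env bash
# Split the two ports into an initiator side (root namespace) and a target side
# (cvl_0_0_ns_spdk), mirroring the nvmf_tcp_init steps traced above.
set -e
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and tag the rule so cleanup can strip it again later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                         # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> root namespace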
00:40:38.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:40:38.334 00:40:38.334 --- 10.0.0.2 ping statistics --- 00:40:38.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.334 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:38.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:38.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:40:38.334 00:40:38.334 --- 10.0.0.1 ping statistics --- 00:40:38.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.334 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:38.334 only one NIC for nvmf test 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.334 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.334 rmmod nvme_tcp 00:40:38.334 rmmod nvme_fabrics 00:40:38.334 rmmod nvme_keyring 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:38.334 20:10:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.334 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:40:40.865 20:10:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:40.865 00:40:40.865 real 0m4.488s 00:40:40.865 user 0m0.867s 00:40:40.865 sys 0m1.572s 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:40.865 ************************************ 00:40:40.865 END TEST nvmf_target_multipath 00:40:40.865 ************************************ 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:40.865 ************************************ 00:40:40.865 START TEST nvmf_zcopy 00:40:40.865 ************************************ 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:40.865 * Looking for test storage... 
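The nvmftestfini path that just closed the multipath test (and runs again at the end of each test) reduces to a short teardown. A sketch under the assumption that the namespace and the SPDK_NVMF-tagged firewall rule from the setup above are still present; the body of _remove_spdk_ns is not shown in the trace, so the netns deletion here is a stand-in:

#!/usr/bin/env bash
# Undo what nvmftestinit built: unload the initiator modules, strip the tagged
# firewall rules, delete the target namespace, and flush the initiator address.
modprobe -v -r nvme-tcp || true        # the real script tolerates unload failures and retries
modprobe -v -r nvme-fabrics || true
iptables-save | grep -v SPDK_NVMF | iptables-restore       # drop only the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk                            # stand-in for the _remove_spdk_ns step
ip -4 addr flush cvl_0_1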
00:40:40.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:40.865 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:40.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.866 --rc genhtml_branch_coverage=1 00:40:40.866 --rc genhtml_function_coverage=1 00:40:40.866 --rc genhtml_legend=1 00:40:40.866 --rc geninfo_all_blocks=1 00:40:40.866 --rc geninfo_unexecuted_blocks=1 00:40:40.866 00:40:40.866 ' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:40.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.866 --rc genhtml_branch_coverage=1 00:40:40.866 --rc genhtml_function_coverage=1 00:40:40.866 --rc genhtml_legend=1 00:40:40.866 --rc geninfo_all_blocks=1 00:40:40.866 --rc geninfo_unexecuted_blocks=1 00:40:40.866 00:40:40.866 ' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:40.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.866 --rc genhtml_branch_coverage=1 00:40:40.866 --rc genhtml_function_coverage=1 00:40:40.866 --rc genhtml_legend=1 00:40:40.866 --rc geninfo_all_blocks=1 00:40:40.866 --rc geninfo_unexecuted_blocks=1 00:40:40.866 00:40:40.866 ' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:40.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.866 --rc genhtml_branch_coverage=1 00:40:40.866 --rc genhtml_function_coverage=1 00:40:40.866 --rc genhtml_legend=1 00:40:40.866 --rc geninfo_all_blocks=1 00:40:40.866 --rc geninfo_unexecuted_blocks=1 00:40:40.866 00:40:40.866 ' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:40.866 20:10:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:40.866 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:42.768 20:10:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:42.768 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:42.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:42.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:42.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:42.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:42.769 20:10:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:42.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:42.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:40:42.769 00:40:42.769 --- 10.0.0.2 ping statistics --- 00:40:42.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.769 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:42.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:42.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:40:42.769 00:40:42.769 --- 10.0.0.1 ping statistics --- 00:40:42.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.769 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:42.769 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3199235 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3199235 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3199235 ']' 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:42.770 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:42.770 [2024-10-13 20:10:32.401099] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:42.770 [2024-10-13 20:10:32.403807] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:40:42.770 [2024-10-13 20:10:32.403908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:42.770 [2024-10-13 20:10:32.551157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.028 [2024-10-13 20:10:32.687195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:43.028 [2024-10-13 20:10:32.687270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:43.028 [2024-10-13 20:10:32.687298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:43.028 [2024-10-13 20:10:32.687320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:43.028 [2024-10-13 20:10:32.687341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:43.028 [2024-10-13 20:10:32.688959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:43.287 [2024-10-13 20:10:33.052944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:43.287 [2024-10-13 20:10:33.053390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
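nvmfappstart therefore amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket, as waitforlisten does above with rpc_addr=/var/tmp/spdk.sock and max_retries=100. A sketch under the assumption that polling rpc.py rpc_get_methods is an acceptable readiness probe (the real waitforlisten may check differently):

#!/usr/bin/env bash
# Start the target with interrupt mode on core mask 0x2 inside the namespace,
# then poll the RPC socket until it answers or the process dies.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only succeeds once the app is up and listening on the socket
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done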
00:40:43.545 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:43.545 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:40:43.545 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:43.545 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:43.545 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.803 [2024-10-13 20:10:33.382022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.803 [2024-10-13 20:10:33.398196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.803 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:43.804 20:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.804 malloc0 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:43.804 { 00:40:43.804 "params": { 00:40:43.804 "name": "Nvme$subsystem", 00:40:43.804 "trtype": "$TEST_TRANSPORT", 00:40:43.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:43.804 "adrfam": "ipv4", 00:40:43.804 "trsvcid": "$NVMF_PORT", 00:40:43.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:43.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:43.804 "hdgst": ${hdgst:-false}, 00:40:43.804 "ddgst": ${ddgst:-false} 00:40:43.804 }, 00:40:43.804 "method": "bdev_nvme_attach_controller" 00:40:43.804 } 00:40:43.804 EOF 00:40:43.804 )") 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:40:43.804 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:43.804 "params": { 00:40:43.804 "name": "Nvme1", 00:40:43.804 "trtype": "tcp", 00:40:43.804 "traddr": "10.0.0.2", 00:40:43.804 "adrfam": "ipv4", 00:40:43.804 "trsvcid": "4420", 00:40:43.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:43.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:43.804 "hdgst": false, 00:40:43.804 "ddgst": false 00:40:43.804 }, 00:40:43.804 "method": "bdev_nvme_attach_controller" 00:40:43.804 }' 00:40:43.804 [2024-10-13 20:10:33.542859] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
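With the target up, the zcopy test configures it entirely over JSON-RPC and then points bdevperf at it. rpc_cmd in the harness is a thin wrapper that forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock, so the same setup can be reproduced directly; the flags below are exactly the ones echoed in the trace (a TCP transport with zero-copy enabled via --zcopy plus the harness's usual TCP options, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB / 4 KiB-block malloc bdev attached as namespace 1). The last line assumes the harness's gen_nvmf_target_json helper is still sourced; outside the harness, the bdev_nvme_attach_controller JSON printed above could be saved to a file and passed to --json instead of the process substitution that shows up here as /dev/fd/62:

  rpc=./scripts/rpc.py          # rpc_cmd in the harness resolves to this script
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # First workload: a 10 s verify pass at queue depth 128 with 8 KiB I/O,
  # attaching to the target through the generated NVMe-oF controller config
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192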
00:40:43.804 [2024-10-13 20:10:33.542985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199389 ] 00:40:44.062 [2024-10-13 20:10:33.676546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.062 [2024-10-13 20:10:33.813200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.629 Running I/O for 10 seconds... 00:40:46.938 3440.00 IOPS, 26.88 MiB/s [2024-10-13T18:10:37.687Z] 3480.50 IOPS, 27.19 MiB/s [2024-10-13T18:10:38.620Z] 3476.00 IOPS, 27.16 MiB/s [2024-10-13T18:10:39.554Z] 3469.00 IOPS, 27.10 MiB/s [2024-10-13T18:10:40.487Z] 3462.20 IOPS, 27.05 MiB/s [2024-10-13T18:10:41.421Z] 3470.50 IOPS, 27.11 MiB/s [2024-10-13T18:10:42.794Z] 3466.57 IOPS, 27.08 MiB/s [2024-10-13T18:10:43.728Z] 3465.12 IOPS, 27.07 MiB/s [2024-10-13T18:10:44.662Z] 3465.89 IOPS, 27.08 MiB/s [2024-10-13T18:10:44.662Z] 3462.80 IOPS, 27.05 MiB/s 00:40:54.847 Latency(us) 00:40:54.847 [2024-10-13T18:10:44.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.847 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:54.847 Verification LBA range: start 0x0 length 0x1000 00:40:54.847 Nvme1n1 : 10.03 3464.40 27.07 0.00 0.00 36846.43 4320.52 48351.00 00:40:54.847 [2024-10-13T18:10:44.662Z] =================================================================================================================== 00:40:54.847 [2024-10-13T18:10:44.662Z] Total : 3464.40 27.07 0.00 0.00 36846.43 4320.52 48351.00 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3200699 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:55.782 { 00:40:55.782 "params": { 00:40:55.782 "name": "Nvme$subsystem", 00:40:55.782 "trtype": "$TEST_TRANSPORT", 00:40:55.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:55.782 "adrfam": "ipv4", 00:40:55.782 "trsvcid": "$NVMF_PORT", 00:40:55.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:55.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:55.782 "hdgst": ${hdgst:-false}, 00:40:55.782 "ddgst": ${ddgst:-false} 00:40:55.782 }, 00:40:55.782 "method": "bdev_nvme_attach_controller" 00:40:55.782 } 00:40:55.782 EOF 00:40:55.782 )") 00:40:55.782 [2024-10-13 20:10:45.341954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:40:55.782 [2024-10-13 20:10:45.342019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:40:55.782 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:55.782 "params": { 00:40:55.782 "name": "Nvme1", 00:40:55.782 "trtype": "tcp", 00:40:55.782 "traddr": "10.0.0.2", 00:40:55.782 "adrfam": "ipv4", 00:40:55.782 "trsvcid": "4420", 00:40:55.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:55.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:55.782 "hdgst": false, 00:40:55.782 "ddgst": false 00:40:55.782 }, 00:40:55.782 "method": "bdev_nvme_attach_controller" 00:40:55.782 }' 00:40:55.782 [2024-10-13 20:10:45.349821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.349856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.357795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.357829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.365796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.365830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.373810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.373844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.381810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.381842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.389799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.389832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.397800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.397831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.405782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.405814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.413805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.782 [2024-10-13 20:10:45.413838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.782 [2024-10-13 20:10:45.421449] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
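The 10-second verify run above settles at roughly 3.46 K IOPS, and its summary table is internally consistent: at an 8 KiB I/O size the IOPS column reproduces the MiB/s column, and with 128 commands outstanding Little's law lands close to the reported ~36.8 ms average latency. A quick sanity check of that arithmetic, purely illustrative, with the values copied from the table:

  awk 'BEGIN {
    iops = 3464.40; io_size = 8192; qdepth = 128
    printf "throughput:  %.2f MiB/s\n", iops * io_size / 1048576   # ~27.07 MiB/s, as reported
    printf "avg latency: %.0f us\n",    qdepth / iops * 1e6        # ~36900 us vs 36846.43 us reported
  }'

The second bdevperf instance (perfpid 3200699) is then launched with the same generated JSON config but a 5-second 50/50 random read/write workload (-t 5 -q 128 -w randrw -M 50 -o 8192), and the first "Requested NSID 1 already in use" rejections from the target start appearing while it initializes.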
00:40:55.782 [2024-10-13 20:10:45.421569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200699 ] 00:40:55.783 [2024-10-13 20:10:45.421775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.421806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.429794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.429826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.437792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.437824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.445776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.445812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.453811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.453843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.461799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.461830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.469782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.469813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.477807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.477838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.485776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.485807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.493793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.493825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.501829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.501861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.509812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.509843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.517803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.517835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.525784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.525829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.533787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.533818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.541813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.541846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.549792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.549824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.552389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.783 [2024-10-13 20:10:45.557810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.557842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.565812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.565846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.573913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.573971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.581830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.581867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:55.783 [2024-10-13 20:10:45.589800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:55.783 [2024-10-13 20:10:45.589844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.597791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.041 [2024-10-13 20:10:45.597825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.605809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.041 [2024-10-13 20:10:45.605841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.613783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.041 [2024-10-13 20:10:45.613815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.621807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.041 [2024-10-13 20:10:45.621839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.629809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.041 [2024-10-13 20:10:45.629841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.637797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:56.041 [2024-10-13 20:10:45.637829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.041 [2024-10-13 20:10:45.645818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.041 [2024-10-13 20:10:45.645851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.653804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.653836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.661791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.661822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.669827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.669858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.677796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.677827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.685814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.685846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.692561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.042 [2024-10-13 20:10:45.693802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.693833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.701788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.701819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.709895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.709944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.717909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.717966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.725790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.725823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.733823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.733854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.741793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.741836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.749814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.749845] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.757798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.757829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.765809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.765841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.773810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.773841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.781833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.781873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.789873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.789924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.797896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.797951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.805902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.805961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.813921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.813977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.821803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.821835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.829792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.829823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.837811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.837842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.845829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.845874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.042 [2024-10-13 20:10:45.853784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.042 [2024-10-13 20:10:45.853816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.861832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.861864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.869791] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.869822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.877806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.877838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.885807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.885838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.893787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.893819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.901808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.901839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.909800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.909831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.917788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.917819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.925809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.925840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.933790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.933822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.941850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.941892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.949905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.949960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.957925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.957981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.965830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.965866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.973807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.973839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.981786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.981817] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.989804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.989836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:45.997788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:45.997818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.005812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.005844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.013807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.013839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.021800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.021832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.029805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.029836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.037802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.037834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.045790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.045822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.053833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.053866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.061787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.061819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.069817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.069851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.077824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.077861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.085837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.085874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.093822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.093858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.101824] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.101861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.301 [2024-10-13 20:10:46.109820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.301 [2024-10-13 20:10:46.109853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.117809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.117842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.125791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.125824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.133807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.133839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.141825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.141859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.149799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.149836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.157822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.157859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.165815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.165850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.189805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.189845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.197812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.197846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 Running I/O for 5 seconds... 
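From here to the end of the capture the trace is dominated by paired target-side errors, a spdk_nvmf_subsystem_add_ns_ext rejection followed by the matching nvmf_rpc_ns_paused failure, repeating every few milliseconds while the 5-second randrw job runs. The harness disables xtrace for this phase (target/zcopy.sh@41 above), so only the target's side of the exchange is visible; a hedged reconstruction of what the host side is presumably doing is re-issuing the add-namespace RPC for an NSID that is already occupied and confirming that each attempt is rejected without disturbing the in-flight I/O. The loop shape, pacing, and the use of perfpid as a backgrounded-process handle are illustrative assumptions, not the harness's actual code:

  # Illustrative only; the real iteration count and pacing are hidden by xtrace_disable.
  while kill -0 "$perfpid" 2>/dev/null; do          # perfpid=3200699 in this run (assumed backgrounded)
    # Each attempt should fail with "Requested NSID 1 already in use"
    if ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
      echo "unexpected: duplicate NSID was accepted" >&2
    fi
  done
  wait "$perfpid"                                    # collect the bdevperf result afterwards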
00:40:56.560 [2024-10-13 20:10:46.214335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.214376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.227603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.227638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.244780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.244819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.261257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.261296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.276703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.276755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.292422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.292475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.308357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.308405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.324509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.324542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.339879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.339918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.356287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.356326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.560 [2024-10-13 20:10:46.372036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.560 [2024-10-13 20:10:46.372076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.387428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.387465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.403628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.403664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.420319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.420358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.436890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 
[2024-10-13 20:10:46.436929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.452762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.452802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.468197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.468237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.483851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.483889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.499644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.499687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.516246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.516297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.531926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.531966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.547660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.547708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.563682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.563732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.581196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.581235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.597508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.597542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.613346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.613386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:56.819 [2024-10-13 20:10:46.629533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:56.819 [2024-10-13 20:10:46.629567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.645486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.645520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.661715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.661748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.677804] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.677844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.693159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.693198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.709173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.709212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.725503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.725536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.741526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.741560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.756388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.756453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.771911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.771949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.788068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.788106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.804278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.804317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.820201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.820249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.835935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.835974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.852046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.852085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.868185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.868224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.077 [2024-10-13 20:10:46.883409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.077 [2024-10-13 20:10:46.883459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.899658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.899707] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.915497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.915531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.930909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.930949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.946459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.946494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.962005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.962044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.976986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.977025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:46.991599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:46.991649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.007959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.007998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.023832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.023872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.040117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.040157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.056228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.056267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.072050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.072091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.087923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.087963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.103690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.103723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.118925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.118974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.134421] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.134474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.336 [2024-10-13 20:10:47.150723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.336 [2024-10-13 20:10:47.150777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.167243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.167285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.184411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.184450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.200837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.200877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 7927.00 IOPS, 61.93 MiB/s [2024-10-13T18:10:47.409Z] [2024-10-13 20:10:47.217075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.217114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.233154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.233193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.249198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.249236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.265142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.594 [2024-10-13 20:10:47.265181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.594 [2024-10-13 20:10:47.280982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.281021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.296988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.297026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.313663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.313710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.329580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.329615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.345244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.345282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.362130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:57.595 [2024-10-13 20:10:47.362168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.375911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.375950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.391992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.392031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.595 [2024-10-13 20:10:47.408045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.595 [2024-10-13 20:10:47.408086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.424403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.424459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.440862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.440901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.457222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.457261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.473292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.473331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.488802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.488841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.504624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.504658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.519949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.519989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.536008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.536046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.551900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.551941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.567661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.567715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.582985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.583024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.600038] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.600078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.615887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.615927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.632253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.632292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.648326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.648365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:57.853 [2024-10-13 20:10:47.663608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:57.853 [2024-10-13 20:10:47.663641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.679308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.679348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.694019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.694058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.710048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.710087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.724692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.724724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.740322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.740361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.756008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.756047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.771416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.771465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.786993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.787032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.802385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.802448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.819180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.819218] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.835900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.835938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.852158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.852196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.868552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.868585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.884484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.884517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.900256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.900295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.111 [2024-10-13 20:10:47.916506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.111 [2024-10-13 20:10:47.916540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:47.932920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:47.932960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:47.948291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:47.948330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:47.963601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:47.963635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:47.980205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:47.980243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:47.997042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:47.997081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.012945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.012984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.028386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.028449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.043978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.044016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.059680] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.059734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.075652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.075686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.091618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.091651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.107521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.107555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.123554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.123588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.139407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.139459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.155084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.155122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.369 [2024-10-13 20:10:48.170755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.369 [2024-10-13 20:10:48.170793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.186616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.186651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.203006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.203045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 7950.50 IOPS, 62.11 MiB/s [2024-10-13T18:10:48.442Z] [2024-10-13 20:10:48.218475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.218525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.234100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.234139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.250418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.250470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.265026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.265065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.281589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:58.627 [2024-10-13 20:10:48.281624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.298058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.298097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.315075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.315125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.331715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.331765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.347994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.627 [2024-10-13 20:10:48.348033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.627 [2024-10-13 20:10:48.363338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.628 [2024-10-13 20:10:48.363388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.628 [2024-10-13 20:10:48.379029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.628 [2024-10-13 20:10:48.379068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.628 [2024-10-13 20:10:48.394311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.628 [2024-10-13 20:10:48.394350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.628 [2024-10-13 20:10:48.410656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.628 [2024-10-13 20:10:48.410705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.628 [2024-10-13 20:10:48.426746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.628 [2024-10-13 20:10:48.426785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.443339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.443379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.458588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.458623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.474211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.474250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.490714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.490764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.506681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.506715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.522124] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.522162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.538124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.538163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.553164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.553203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.569456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.569488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.585475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.585525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.601237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.601276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.617281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.617330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.632823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.632863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.648358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.648407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.663754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.663793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.679639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.679689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:58.886 [2024-10-13 20:10:48.696327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:58.886 [2024-10-13 20:10:48.696368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.712894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.712934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.729190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.729229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.745540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.745573] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.761497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.761530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.777797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.777837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.793876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.793916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.809122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.809160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.824879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.824918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.841127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.841167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.144 [2024-10-13 20:10:48.857024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.144 [2024-10-13 20:10:48.857063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.145 [2024-10-13 20:10:48.872708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.145 [2024-10-13 20:10:48.872761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.145 [2024-10-13 20:10:48.889164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.145 [2024-10-13 20:10:48.889204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.145 [2024-10-13 20:10:48.904331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.145 [2024-10-13 20:10:48.904370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.145 [2024-10-13 20:10:48.920497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.145 [2024-10-13 20:10:48.920560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.145 [2024-10-13 20:10:48.935917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.145 [2024-10-13 20:10:48.935955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.145 [2024-10-13 20:10:48.950923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.145 [2024-10-13 20:10:48.950961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:48.966372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:48.966421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:48.982499] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:48.982532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:48.997120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:48.997158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.012379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.012430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.027555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.027604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.043605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.043639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.059813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.059852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.076042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.076080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.092235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.092273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.108986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.109025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.125222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.125261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.141753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.141792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.158415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.158465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.172650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.172697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.190135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.190174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.403 [2024-10-13 20:10:49.204061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.403 [2024-10-13 20:10:49.204100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 7960.67 IOPS, 62.19 MiB/s [2024-10-13T18:10:49.476Z] [2024-10-13 20:10:49.219916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.219956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.235854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.235893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.252018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.252056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.267552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.267586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.283274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.283312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.298794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.298834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.314256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.314295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.329664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.329712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.346178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.346219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.361731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.361771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.378389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.378466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.394772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.394808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.410708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.410760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.425716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.425758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 
20:10:49.441665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.441715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.457603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.457642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.661 [2024-10-13 20:10:49.474712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.661 [2024-10-13 20:10:49.474763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.491179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.491218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.507380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.507443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.523555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.523588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.539030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.539068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.555224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.555262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.571491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.571524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.587074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.587112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.603304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.603343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.619300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.619339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.635533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.635566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.651178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.651216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.667607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.667640] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.683248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.683287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.699696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.699728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.715390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.715440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:59.919 [2024-10-13 20:10:49.731165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:59.919 [2024-10-13 20:10:49.731204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.746957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.746997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.763309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.763349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.778867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.778906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.793560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.793594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.809081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.809121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.825288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.825327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.840717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.840755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.856437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.856487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.872476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.872511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.888528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.888562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.904387] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.904450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.920444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.920478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.936210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.936248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.951847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.951887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.967843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.967882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.178 [2024-10-13 20:10:49.983338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.178 [2024-10-13 20:10:49.983377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:49.998729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:49.998768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.015237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.015300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.030903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.030939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.047091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.047141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.064258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.064299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.079759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.079805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.096426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.096465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.113309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.113352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.129299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.129338] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.145468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.145503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.161313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.161353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.176044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.176083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.191800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.191839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 [2024-10-13 20:10:50.207970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.208009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.436 7958.75 IOPS, 62.18 MiB/s [2024-10-13T18:10:50.251Z] [2024-10-13 20:10:50.224434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.436 [2024-10-13 20:10:50.224485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.437 [2024-10-13 20:10:50.240865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.437 [2024-10-13 20:10:50.240904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.256654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.256704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.273274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.273313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.290189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.290228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.303930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.303964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.319437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.319471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.335166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.335205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.350960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.351010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 
20:10:50.366164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.366202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.382195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.382229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.397902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.397942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.413922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.413972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.429464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.429503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.446790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.446832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.464030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.464070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.481542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.481578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.694 [2024-10-13 20:10:50.498890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.694 [2024-10-13 20:10:50.498931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.515883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.515924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.533589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.533622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.549277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.549327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.566548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.566597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.583671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.583734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.600858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.600907] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.617129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.617168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.633213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.633252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.649535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.649570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.666588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.666623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.683041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.683082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.698366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.698413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.715513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.715547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.952 [2024-10-13 20:10:50.731685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.952 [2024-10-13 20:10:50.731727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.953 [2024-10-13 20:10:50.747049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.953 [2024-10-13 20:10:50.747089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:00.953 [2024-10-13 20:10:50.762666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:00.953 [2024-10-13 20:10:50.762715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.779691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.779726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.795123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.795163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.811316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.811355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.826948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.826986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.843618] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.843651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.859033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.859072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.875473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.875506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.891435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.891468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.907922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.907961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.924112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.924150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.939856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.939912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.955466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.955499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.971948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.971988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:50.987122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:50.987161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:51.002820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:51.002860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.211 [2024-10-13 20:10:51.018959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.211 [2024-10-13 20:10:51.018999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.035034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.035073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.051272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.051312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.066459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.066493] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.083005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.083043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.099356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.099404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.114882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.114921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.131595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.131628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.147568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.147620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.164188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.164226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.179825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.179865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.196280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.196318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.212587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.212619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 7933.80 IOPS, 61.98 MiB/s [2024-10-13T18:10:51.284Z] [2024-10-13 20:10:51.226406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.226458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469
00:41:01.469 Latency(us)
00:41:01.469 [2024-10-13T18:10:51.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:01.469 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:01.469 Nvme1n1 : 5.01 7939.81 62.03 0.00 0.00 16093.92 4053.52 26214.40
00:41:01.469 [2024-10-13T18:10:51.284Z] ===================================================================================================================
00:41:01.469 [2024-10-13T18:10:51.284Z] Total : 7939.81 62.03 0.00 0.00 16093.92 4053.52 26214.40
00:41:01.469 [2024-10-13 20:10:51.233810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.233845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.242238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.242275]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.249819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.249854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.257793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.257826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.265798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.265831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.273796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.273828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.469 [2024-10-13 20:10:51.281976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.469 [2024-10-13 20:10:51.282040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.289976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.290039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.297802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.297837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.305810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.305842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.313801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.313833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.321782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.321809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.329819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.329851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.337810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.337842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.345805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.345837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.353798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.353830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.361787] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.361819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.369785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.369813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.377927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.377990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.385911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.385974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.393837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.393871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.401786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.401828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.409804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.409836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.417799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.417831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.425790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.425821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.433823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.433854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.441806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.441838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.449779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.449810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.457806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.457839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.465805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.465837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.473800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.473831] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.481799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.481831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.489789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.489821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.497809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.497841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.505803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.505835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.513785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.513816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.521798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.521828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.529814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.529845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.727 [2024-10-13 20:10:51.537841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.727 [2024-10-13 20:10:51.537877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.545962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.546023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.553805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.553849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.561812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.561843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.569798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.569830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.577780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.577811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.585799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.585830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.593795] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.593825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.601847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.601888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.609971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.610032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.617940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.618000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.625974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.626037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.633808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.633840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.641784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.641817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.649804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.649835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.657783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.657814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.665797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.665828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.673806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.673838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.681815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.681847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.689805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.689836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.697802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.697834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.705811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.705854] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.713967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.713999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.721805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.721836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.729799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.729831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.737807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.737839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.745781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.745813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.753804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.753836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.761803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.761834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.769782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.769814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.777795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.777826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.789929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.789998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:01.986 [2024-10-13 20:10:51.797821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:01.986 [2024-10-13 20:10:51.797853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.805802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.805833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.813786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.813816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.821820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.821851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.829813] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.829844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.837783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.837813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.845806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.845837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.853788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.853819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.861814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.861856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.869801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.869833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.877786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.877817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.885801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.885832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.893805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.893837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.901804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.901837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.909951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.910009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.917788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.917820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.925802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.925833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.933796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.933827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.941785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.941817] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.949797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.949828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.957808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.957840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.965776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.965807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.973803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.973834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.981779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.981809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.989795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.989826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:51.997881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:51.997933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.005823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.005861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.013822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.013864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.021799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.021831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.029784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.029815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.037796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.037827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.045795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.045827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.245 [2024-10-13 20:10:52.053814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.245 [2024-10-13 20:10:52.053846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.061799] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.061831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.069782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.069813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.077805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.077836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.085807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.085839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.093796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.093828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.101813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.101846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.109812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.109844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.117798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.117829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 [2024-10-13 20:10:52.125817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.504 [2024-10-13 20:10:52.125848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3200699) - No such process 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3200699 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.504 delay0 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.504 20:10:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.504 20:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:02.504 [2024-10-13 20:10:52.290949] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:10.692 Initializing NVMe Controllers 00:41:10.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:10.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:10.692 Initialization complete. Launching workers. 00:41:10.692 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 13122 00:41:10.692 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13229, failed to submit 131 00:41:10.692 success 13144, unsuccessful 85, failed 0 00:41:10.692 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:10.692 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:10.692 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:10.692 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:10.693 rmmod nvme_tcp 00:41:10.693 rmmod nvme_fabrics 00:41:10.693 rmmod nvme_keyring 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3199235 ']' 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3199235 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3199235 ']' 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3199235 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 
00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3199235 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3199235' 00:41:10.693 killing process with pid 3199235 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3199235 00:41:10.693 20:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3199235 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.952 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:13.488 00:41:13.488 real 0m32.563s 00:41:13.488 user 0m45.828s 00:41:13.488 sys 0m10.677s 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.488 ************************************ 00:41:13.488 END TEST nvmf_zcopy 00:41:13.488 ************************************ 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:13.488 ************************************ 00:41:13.488 START TEST nvmf_nmic 00:41:13.488 ************************************ 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:13.488 * Looking for test storage... 00:41:13.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.488 --rc genhtml_branch_coverage=1 00:41:13.488 --rc genhtml_function_coverage=1 00:41:13.488 --rc genhtml_legend=1 00:41:13.488 --rc geninfo_all_blocks=1 00:41:13.488 --rc geninfo_unexecuted_blocks=1 00:41:13.488 00:41:13.488 ' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.488 --rc genhtml_branch_coverage=1 00:41:13.488 --rc genhtml_function_coverage=1 00:41:13.488 --rc genhtml_legend=1 00:41:13.488 --rc geninfo_all_blocks=1 00:41:13.488 --rc geninfo_unexecuted_blocks=1 00:41:13.488 00:41:13.488 ' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.488 --rc genhtml_branch_coverage=1 00:41:13.488 --rc genhtml_function_coverage=1 00:41:13.488 --rc genhtml_legend=1 00:41:13.488 --rc geninfo_all_blocks=1 00:41:13.488 --rc geninfo_unexecuted_blocks=1 00:41:13.488 00:41:13.488 ' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.488 --rc genhtml_branch_coverage=1 00:41:13.488 --rc genhtml_function_coverage=1 00:41:13.488 --rc genhtml_legend=1 00:41:13.488 --rc geninfo_all_blocks=1 00:41:13.488 --rc geninfo_unexecuted_blocks=1 00:41:13.488 00:41:13.488 ' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.488 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:13.489 20:11:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:13.489 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:15.393 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:15.394 20:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:15.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:15.394 20:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:15.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:15.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:15.394 
20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:15.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:15.394 20:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:15.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:15.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:41:15.394 00:41:15.394 --- 10.0.0.2 ping statistics --- 00:41:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.394 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:15.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:15.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:41:15.394 00:41:15.394 --- 10.0.0.1 ping statistics --- 00:41:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.394 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:15.394 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3204343 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 3204343 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3204343 ']' 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:15.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:15.395 20:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:15.395 [2024-10-13 20:11:05.134171] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:15.395 [2024-10-13 20:11:05.136799] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:41:15.395 [2024-10-13 20:11:05.136897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.654 [2024-10-13 20:11:05.278003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:15.654 [2024-10-13 20:11:05.420503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.654 [2024-10-13 20:11:05.420583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:15.654 [2024-10-13 20:11:05.420610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.654 [2024-10-13 20:11:05.420631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.654 [2024-10-13 20:11:05.420653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.654 [2024-10-13 20:11:05.423431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:15.654 [2024-10-13 20:11:05.423503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:15.654 [2024-10-13 20:11:05.423595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.654 [2024-10-13 20:11:05.423604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:16.219 [2024-10-13 20:11:05.796540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:16.219 [2024-10-13 20:11:05.805711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:16.219 [2024-10-13 20:11:05.805931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
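nvmfappstart then launches the target inside that namespace and blocks until its JSON-RPC socket answers. A minimal sketch of the same two steps, using the binary path and flags shown in the trace (the readiness loop below is illustrative; the harness's waitforlisten helper performs the equivalent check):

# -i 0: shm instance id, -e 0xFFFF: tracepoint group mask, -m 0xF: run on cores 0-3, interrupt mode on
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# wait until the app listens on its default RPC socket before sending configuration RPCs
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done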
00:41:16.219 [2024-10-13 20:11:05.806820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:16.219 [2024-10-13 20:11:05.807167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 [2024-10-13 20:11:06.116757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 Malloc0 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:16.477 
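Written out as plain rpc.py invocations, the provisioning that nmic.sh has issued through rpc_cmd at this point is the following (names, sizes and the 10.0.0.2:4420 listener exactly as in the trace; rpc_cmd wraps this script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, io_unit_size 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 below then creates a second subsystem (cnode2) and attempts to add the same Malloc0 to it; that add is expected to fail because the bdev is already claimed with an exclusive write claim by cnode1, which is exactly the JSON-RPC error recorded further down.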
20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 [2024-10-13 20:11:06.232944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:16.477 test case1: single bdev can't be used in multiple subsystems 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.477 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.477 [2024-10-13 20:11:06.256588] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:16.478 [2024-10-13 20:11:06.256640] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:16.478 [2024-10-13 20:11:06.256664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.478 request: 00:41:16.478 { 00:41:16.478 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:16.478 "namespace": { 00:41:16.478 "bdev_name": "Malloc0", 00:41:16.478 "no_auto_visible": false 00:41:16.478 }, 00:41:16.478 "method": "nvmf_subsystem_add_ns", 00:41:16.478 "req_id": 1 00:41:16.478 } 00:41:16.478 Got JSON-RPC error response 00:41:16.478 response: 00:41:16.478 { 00:41:16.478 "code": -32602, 00:41:16.478 "message": "Invalid parameters" 00:41:16.478 } 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:16.478 20:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:16.478 Adding namespace failed - expected result. 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:16.478 test case2: host connect to nvmf target in multiple paths 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.478 [2024-10-13 20:11:06.264702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.478 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:16.736 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:16.993 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:16.993 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:41:16.993 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:16.993 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:41:16.993 20:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:41:19.528 20:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:19.529 [global] 00:41:19.529 thread=1 00:41:19.529 invalidate=1 
00:41:19.529 rw=write 00:41:19.529 time_based=1 00:41:19.529 runtime=1 00:41:19.529 ioengine=libaio 00:41:19.529 direct=1 00:41:19.529 bs=4096 00:41:19.529 iodepth=1 00:41:19.529 norandommap=0 00:41:19.529 numjobs=1 00:41:19.529 00:41:19.529 verify_dump=1 00:41:19.529 verify_backlog=512 00:41:19.529 verify_state_save=0 00:41:19.529 do_verify=1 00:41:19.529 verify=crc32c-intel 00:41:19.529 [job0] 00:41:19.529 filename=/dev/nvme0n1 00:41:19.529 Could not set queue depth (nvme0n1) 00:41:19.529 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:19.529 fio-3.35 00:41:19.529 Starting 1 thread 00:41:20.462 00:41:20.462 job0: (groupid=0, jobs=1): err= 0: pid=3204943: Sun Oct 13 20:11:10 2024 00:41:20.462 read: IOPS=20, BW=82.7KiB/s (84.7kB/s)(84.0KiB/1016msec) 00:41:20.462 slat (nsec): min=7684, max=17787, avg=14522.52, stdev=1779.45 00:41:20.462 clat (usec): min=40947, max=42083, avg=41509.91, stdev=518.75 00:41:20.462 lat (usec): min=40962, max=42091, avg=41524.44, stdev=518.15 00:41:20.462 clat percentiles (usec): 00:41:20.462 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:20.462 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:41:20.462 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:20.462 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:20.462 | 99.99th=[42206] 00:41:20.462 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:41:20.462 slat (usec): min=6, max=25736, avg=58.37, stdev=1137.04 00:41:20.462 clat (usec): min=191, max=451, avg=213.66, stdev=21.73 00:41:20.462 lat (usec): min=199, max=26027, avg=272.03, stdev=1140.71 00:41:20.462 clat percentiles (usec): 00:41:20.462 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 202], 00:41:20.462 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 212], 00:41:20.462 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 245], 00:41:20.462 | 99.00th=[ 273], 99.50th=[ 388], 99.90th=[ 453], 99.95th=[ 453], 00:41:20.462 | 99.99th=[ 453] 00:41:20.462 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:20.462 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:20.462 lat (usec) : 250=93.81%, 500=2.25% 00:41:20.462 lat (msec) : 50=3.94% 00:41:20.462 cpu : usr=0.10%, sys=0.49%, ctx=537, majf=0, minf=1 00:41:20.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.462 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:20.462 00:41:20.462 Run status group 0 (all jobs): 00:41:20.462 READ: bw=82.7KiB/s (84.7kB/s), 82.7KiB/s-82.7KiB/s (84.7kB/s-84.7kB/s), io=84.0KiB (86.0kB), run=1016-1016msec 00:41:20.462 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:41:20.462 00:41:20.462 Disk stats (read/write): 00:41:20.462 nvme0n1: ios=44/512, merge=0/0, ticks=1734/108, in_queue=1842, util=98.40% 00:41:20.462 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:20.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:20.720 20:11:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:20.720 rmmod nvme_tcp 00:41:20.720 rmmod nvme_fabrics 00:41:20.720 rmmod nvme_keyring 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3204343 ']' 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3204343 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3204343 ']' 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3204343 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:20.720 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3204343 00:41:20.978 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:20.979 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:20.979 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3204343' 00:41:20.979 killing process with pid 3204343 00:41:20.979 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3204343 00:41:20.979 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3204343 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:22.354 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.355 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:22.355 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:24.260 00:41:24.260 real 0m11.211s 00:41:24.260 user 0m19.541s 00:41:24.260 sys 0m3.515s 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:24.260 ************************************ 00:41:24.260 END TEST nvmf_nmic 00:41:24.260 ************************************ 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:24.260 20:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:24.260 ************************************ 00:41:24.260 START TEST nvmf_fio_target 00:41:24.260 ************************************ 00:41:24.260 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:24.260 * Looking for test storage... 
00:41:24.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:24.260 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:24.260 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:41:24.260 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.519 --rc genhtml_branch_coverage=1 00:41:24.519 --rc genhtml_function_coverage=1 00:41:24.519 --rc genhtml_legend=1 00:41:24.519 --rc geninfo_all_blocks=1 00:41:24.519 --rc geninfo_unexecuted_blocks=1 00:41:24.519 00:41:24.519 ' 00:41:24.519 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.519 --rc genhtml_branch_coverage=1 00:41:24.519 --rc genhtml_function_coverage=1 00:41:24.520 --rc genhtml_legend=1 00:41:24.520 --rc geninfo_all_blocks=1 00:41:24.520 --rc geninfo_unexecuted_blocks=1 00:41:24.520 00:41:24.520 ' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.520 --rc genhtml_branch_coverage=1 00:41:24.520 --rc genhtml_function_coverage=1 00:41:24.520 --rc genhtml_legend=1 00:41:24.520 --rc geninfo_all_blocks=1 00:41:24.520 --rc geninfo_unexecuted_blocks=1 00:41:24.520 00:41:24.520 ' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.520 --rc genhtml_branch_coverage=1 00:41:24.520 --rc genhtml_function_coverage=1 00:41:24.520 --rc genhtml_legend=1 00:41:24.520 --rc geninfo_all_blocks=1 00:41:24.520 --rc geninfo_unexecuted_blocks=1 00:41:24.520 
00:41:24.520 ' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:24.520 20:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:26.429 20:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:26.429 20:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:26.429 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:26.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:26.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:26.430 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:26.430 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:26.430 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:26.688 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:26.688 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:26.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:26.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:41:26.689 00:41:26.689 --- 10.0.0.2 ping statistics --- 00:41:26.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.689 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:26.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:26.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:41:26.689 00:41:26.689 --- 10.0.0.1 ping statistics --- 00:41:26.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.689 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3207177 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3207177 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3207177 ']' 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:26.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
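For readers reconstructing the setup from this trace: the namespace plumbing above (nvmf_tcp_init in nvmf/common.sh) boils down to roughly the sequence below. This is a condensed reading aid using the device names and addresses from this particular run (cvl_0_0 / cvl_0_1 found under 0000:0a:00.0 and 0000:0a:00.1), not a copy of the script itself.

# Move the target-side port into its own network namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port (the ipts helper also tags the rule with an SPDK_NVMF comment)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1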
00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:26.689 20:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:26.689 [2024-10-13 20:11:16.474872] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:26.689 [2024-10-13 20:11:16.477262] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:41:26.689 [2024-10-13 20:11:16.477371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:26.949 [2024-10-13 20:11:16.609245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:26.949 [2024-10-13 20:11:16.727587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:26.949 [2024-10-13 20:11:16.727676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:26.949 [2024-10-13 20:11:16.727700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:26.949 [2024-10-13 20:11:16.727717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:26.949 [2024-10-13 20:11:16.727735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:26.949 [2024-10-13 20:11:16.730139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:26.949 [2024-10-13 20:11:16.730202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:26.949 [2024-10-13 20:11:16.730242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.949 [2024-10-13 20:11:16.730267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:27.518 [2024-10-13 20:11:17.054633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:27.518 [2024-10-13 20:11:17.063700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:27.518 [2024-10-13 20:11:17.063866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:27.518 [2024-10-13 20:11:17.064726] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:27.518 [2024-10-13 20:11:17.065086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
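With the app now running in interrupt mode inside cvl_0_0_ns_spdk, the remainder of the prologue traced below is plain RPC-driven configuration followed by a kernel-initiator connect. Condensed (writing rpc.py for scripts/rpc.py and trimming the workspace paths), the sequence is approximately:

# Target side: the app was launched above as
#   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                      # repeated seven times -> Malloc0 .. Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
# Initiator side (default namespace): kernel nvme-tcp connect, then wait for the namespaces
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # waitforserial: expect 4

Each of the fio-wrapper passes that follow then generates a libaio job file with one [jobN] section per exposed namespace (/dev/nvme0n1 through /dev/nvme0n4), which is why every run below reports four jobs.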
00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:27.778 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:28.037 [2024-10-13 20:11:17.755381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:28.037 20:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:28.609 20:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:28.609 20:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:28.867 20:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:28.867 20:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:29.125 20:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:29.125 20:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:29.384 20:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:29.384 20:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:29.952 20:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:30.212 20:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:30.212 20:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:30.470 20:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:30.470 20:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:30.728 20:11:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:30.728 20:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:31.294 20:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:31.294 20:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:31.294 20:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:31.552 20:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:31.552 20:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:32.121 20:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:32.121 [2024-10-13 20:11:21.883576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.121 20:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:32.380 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:41:32.949 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:41:35.483 20:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:35.483 [global] 00:41:35.483 thread=1 00:41:35.483 invalidate=1 00:41:35.483 rw=write 00:41:35.483 time_based=1 00:41:35.483 runtime=1 00:41:35.483 ioengine=libaio 00:41:35.483 direct=1 00:41:35.483 bs=4096 00:41:35.483 iodepth=1 00:41:35.483 norandommap=0 00:41:35.483 numjobs=1 00:41:35.483 00:41:35.483 verify_dump=1 00:41:35.483 verify_backlog=512 00:41:35.483 verify_state_save=0 00:41:35.483 do_verify=1 00:41:35.483 verify=crc32c-intel 00:41:35.483 [job0] 00:41:35.483 filename=/dev/nvme0n1 00:41:35.483 [job1] 00:41:35.483 filename=/dev/nvme0n2 00:41:35.483 [job2] 00:41:35.483 filename=/dev/nvme0n3 00:41:35.483 [job3] 00:41:35.484 filename=/dev/nvme0n4 00:41:35.484 Could not set queue depth (nvme0n1) 00:41:35.484 Could not set queue depth (nvme0n2) 00:41:35.484 Could not set queue depth (nvme0n3) 00:41:35.484 Could not set queue depth (nvme0n4) 00:41:35.484 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:35.484 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:35.484 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:35.484 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:35.484 fio-3.35 00:41:35.484 Starting 4 threads 00:41:36.421 00:41:36.421 job0: (groupid=0, jobs=1): err= 0: pid=3208371: Sun Oct 13 20:11:26 2024 00:41:36.421 read: IOPS=150, BW=602KiB/s (617kB/s)(624KiB/1036msec) 00:41:36.421 slat (nsec): min=5766, max=34703, avg=8312.78, stdev=4056.21 00:41:36.421 clat (usec): min=284, max=45013, avg=5305.98, stdev=13381.91 00:41:36.421 lat (usec): min=290, max=45030, avg=5314.29, stdev=13383.89 00:41:36.421 clat percentiles (usec): 00:41:36.421 | 1.00th=[ 293], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 297], 00:41:36.421 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:41:36.421 | 70.00th=[ 359], 80.00th=[ 457], 90.00th=[41157], 95.00th=[41157], 00:41:36.421 | 99.00th=[41157], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:41:36.421 | 99.99th=[44827] 00:41:36.421 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:41:36.421 slat (nsec): min=8146, max=49080, avg=15714.79, stdev=6655.08 00:41:36.421 clat (usec): min=215, max=604, avg=383.57, stdev=72.32 00:41:36.421 lat (usec): min=245, max=622, avg=399.29, stdev=72.38 00:41:36.421 clat percentiles (usec): 00:41:36.421 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 314], 00:41:36.421 | 30.00th=[ 351], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 00:41:36.421 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 482], 00:41:36.421 | 99.00th=[ 
519], 99.50th=[ 537], 99.90th=[ 603], 99.95th=[ 603], 00:41:36.421 | 99.99th=[ 603] 00:41:36.421 bw ( KiB/s): min= 4096, max= 4096, per=33.44%, avg=4096.00, stdev= 0.00, samples=1 00:41:36.421 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:36.421 lat (usec) : 250=3.44%, 500=91.02%, 750=2.69% 00:41:36.421 lat (msec) : 50=2.84% 00:41:36.421 cpu : usr=0.10%, sys=1.55%, ctx=670, majf=0, minf=1 00:41:36.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.421 issued rwts: total=156,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:36.421 job1: (groupid=0, jobs=1): err= 0: pid=3208372: Sun Oct 13 20:11:26 2024 00:41:36.421 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:41:36.421 slat (nsec): min=6580, max=14119, avg=13087.86, stdev=1546.90 00:41:36.421 clat (usec): min=40738, max=41052, avg=40970.85, stdev=59.35 00:41:36.421 lat (usec): min=40745, max=41065, avg=40983.94, stdev=60.68 00:41:36.421 clat percentiles (usec): 00:41:36.421 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:36.421 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:36.421 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:36.421 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:36.421 | 99.99th=[41157] 00:41:36.421 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:41:36.421 slat (nsec): min=5610, max=39494, avg=10004.43, stdev=5273.64 00:41:36.421 clat (usec): min=189, max=603, avg=286.79, stdev=83.33 00:41:36.421 lat (usec): min=198, max=612, avg=296.80, stdev=86.19 00:41:36.421 clat percentiles (usec): 00:41:36.421 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 225], 00:41:36.421 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 260], 00:41:36.421 | 70.00th=[ 322], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 449], 00:41:36.421 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 603], 99.95th=[ 603], 00:41:36.421 | 99.99th=[ 603] 00:41:36.421 bw ( KiB/s): min= 4096, max= 4096, per=33.44%, avg=4096.00, stdev= 0.00, samples=1 00:41:36.421 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:36.421 lat (usec) : 250=54.22%, 500=40.90%, 750=0.94% 00:41:36.421 lat (msec) : 50=3.94% 00:41:36.421 cpu : usr=0.20%, sys=0.49%, ctx=533, majf=0, minf=1 00:41:36.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.422 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:36.422 job2: (groupid=0, jobs=1): err= 0: pid=3208373: Sun Oct 13 20:11:26 2024 00:41:36.422 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:41:36.422 slat (nsec): min=7161, max=26749, avg=13489.29, stdev=3335.27 00:41:36.422 clat (usec): min=40835, max=41054, avg=40976.61, stdev=40.74 00:41:36.422 lat (usec): min=40842, max=41067, avg=40990.10, stdev=41.80 00:41:36.422 clat percentiles (usec): 00:41:36.422 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:36.422 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:36.422 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:36.422 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:36.422 | 99.99th=[41157] 00:41:36.422 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:41:36.422 slat (nsec): min=7178, max=38702, avg=11739.07, stdev=6204.71 00:41:36.422 clat (usec): min=204, max=633, avg=286.08, stdev=86.86 00:41:36.422 lat (usec): min=213, max=663, avg=297.81, stdev=89.84 00:41:36.422 clat percentiles (usec): 00:41:36.422 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:41:36.422 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 253], 00:41:36.422 | 70.00th=[ 281], 80.00th=[ 371], 90.00th=[ 424], 95.00th=[ 461], 00:41:36.422 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 635], 99.95th=[ 635], 00:41:36.422 | 99.99th=[ 635] 00:41:36.422 bw ( KiB/s): min= 4096, max= 4096, per=33.44%, avg=4096.00, stdev= 0.00, samples=1 00:41:36.422 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:36.422 lat (usec) : 250=55.72%, 500=38.27%, 750=2.06% 00:41:36.422 lat (msec) : 50=3.94% 00:41:36.422 cpu : usr=0.49%, sys=0.59%, ctx=533, majf=0, minf=2 00:41:36.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.422 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:36.422 job3: (groupid=0, jobs=1): err= 0: pid=3208374: Sun Oct 13 20:11:26 2024 00:41:36.422 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:41:36.422 slat (nsec): min=4155, max=39343, avg=10002.27, stdev=4239.12 00:41:36.422 clat (usec): min=265, max=627, avg=345.01, stdev=56.95 00:41:36.422 lat (usec): min=269, max=635, avg=355.01, stdev=58.86 00:41:36.422 clat percentiles (usec): 00:41:36.422 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 293], 00:41:36.422 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 375], 00:41:36.422 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 453], 00:41:36.422 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 627], 00:41:36.422 | 99.99th=[ 627] 00:41:36.422 write: IOPS=1634, BW=6537KiB/s (6694kB/s)(6544KiB/1001msec); 0 zone resets 00:41:36.422 slat (nsec): min=5547, max=39198, avg=9193.04, stdev=4647.41 00:41:36.422 clat (usec): min=179, max=645, avg=263.58, stdev=96.79 00:41:36.422 lat (usec): min=185, max=674, avg=272.77, stdev=100.01 00:41:36.422 clat percentiles (usec): 00:41:36.422 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 186], 20.00th=[ 190], 00:41:36.422 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 221], 60.00th=[ 227], 00:41:36.422 | 70.00th=[ 251], 80.00th=[ 379], 90.00th=[ 429], 95.00th=[ 453], 00:41:36.422 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 644], 99.95th=[ 644], 00:41:36.422 | 99.99th=[ 644] 00:41:36.422 bw ( KiB/s): min= 8192, max= 8192, per=66.89%, avg=8192.00, stdev= 0.00, samples=1 00:41:36.422 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:36.422 lat (usec) : 250=36.10%, 500=62.26%, 750=1.64% 00:41:36.422 cpu : usr=1.60%, sys=3.20%, ctx=3172, majf=0, minf=1 00:41:36.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:36.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.422 issued rwts: total=1536,1636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:36.422 00:41:36.422 Run status group 0 (all jobs): 00:41:36.422 READ: bw=6695KiB/s (6856kB/s), 82.8KiB/s-6138KiB/s (84.7kB/s-6285kB/s), io=6936KiB (7102kB), run=1001-1036msec 00:41:36.422 WRITE: bw=12.0MiB/s (12.5MB/s), 1977KiB/s-6537KiB/s (2024kB/s-6694kB/s), io=12.4MiB (13.0MB), run=1001-1036msec 00:41:36.422 00:41:36.422 Disk stats (read/write): 00:41:36.422 nvme0n1: ios=177/512, merge=0/0, ticks=1604/188, in_queue=1792, util=97.80% 00:41:36.422 nvme0n2: ios=37/512, merge=0/0, ticks=721/144, in_queue=865, util=86.98% 00:41:36.422 nvme0n3: ios=16/512, merge=0/0, ticks=656/143, in_queue=799, util=89.02% 00:41:36.422 nvme0n4: ios=1169/1536, merge=0/0, ticks=400/403, in_queue=803, util=89.67% 00:41:36.422 20:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:36.681 [global] 00:41:36.681 thread=1 00:41:36.681 invalidate=1 00:41:36.681 rw=randwrite 00:41:36.681 time_based=1 00:41:36.681 runtime=1 00:41:36.681 ioengine=libaio 00:41:36.681 direct=1 00:41:36.681 bs=4096 00:41:36.681 iodepth=1 00:41:36.681 norandommap=0 00:41:36.681 numjobs=1 00:41:36.681 00:41:36.681 verify_dump=1 00:41:36.681 verify_backlog=512 00:41:36.681 verify_state_save=0 00:41:36.681 do_verify=1 00:41:36.681 verify=crc32c-intel 00:41:36.681 [job0] 00:41:36.681 filename=/dev/nvme0n1 00:41:36.681 [job1] 00:41:36.681 filename=/dev/nvme0n2 00:41:36.681 [job2] 00:41:36.681 filename=/dev/nvme0n3 00:41:36.681 [job3] 00:41:36.681 filename=/dev/nvme0n4 00:41:36.681 Could not set queue depth (nvme0n1) 00:41:36.681 Could not set queue depth (nvme0n2) 00:41:36.681 Could not set queue depth (nvme0n3) 00:41:36.681 Could not set queue depth (nvme0n4) 00:41:36.681 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.681 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.681 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.681 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.681 fio-3.35 00:41:36.681 Starting 4 threads 00:41:38.057 00:41:38.057 job0: (groupid=0, jobs=1): err= 0: pid=3208594: Sun Oct 13 20:11:27 2024 00:41:38.057 read: IOPS=661, BW=2646KiB/s (2710kB/s)(2744KiB/1037msec) 00:41:38.057 slat (nsec): min=5099, max=40522, avg=13742.92, stdev=6354.03 00:41:38.057 clat (usec): min=253, max=42079, avg=1099.53, stdev=5403.74 00:41:38.057 lat (usec): min=260, max=42092, avg=1113.28, stdev=5403.85 00:41:38.057 clat percentiles (usec): 00:41:38.057 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:41:38.057 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:41:38.057 | 70.00th=[ 371], 80.00th=[ 437], 90.00th=[ 506], 95.00th=[ 553], 00:41:38.057 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:38.057 | 99.99th=[42206] 00:41:38.057 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:41:38.057 slat (nsec): min=6811, max=63322, avg=16231.80, stdev=7626.82 00:41:38.057 clat (usec): 
min=189, max=469, avg=242.24, stdev=29.53 00:41:38.057 lat (usec): min=198, max=489, avg=258.48, stdev=31.63 00:41:38.057 clat percentiles (usec): 00:41:38.057 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 225], 00:41:38.057 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:41:38.057 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:41:38.057 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 469], 00:41:38.057 | 99.99th=[ 469] 00:41:38.057 bw ( KiB/s): min= 8192, max= 8192, per=46.09%, avg=8192.00, stdev= 0.00, samples=1 00:41:38.057 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:38.057 lat (usec) : 250=42.28%, 500=53.57%, 750=3.39% 00:41:38.057 lat (msec) : 20=0.06%, 50=0.70% 00:41:38.057 cpu : usr=2.22%, sys=3.19%, ctx=1711, majf=0, minf=2 00:41:38.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.057 issued rwts: total=686,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:38.057 job1: (groupid=0, jobs=1): err= 0: pid=3208595: Sun Oct 13 20:11:27 2024 00:41:38.057 read: IOPS=995, BW=3981KiB/s (4076kB/s)(4116KiB/1034msec) 00:41:38.057 slat (nsec): min=4718, max=57491, avg=13218.34, stdev=8432.12 00:41:38.057 clat (usec): min=261, max=41024, avg=600.12, stdev=2823.85 00:41:38.057 lat (usec): min=269, max=41038, avg=613.33, stdev=2823.85 00:41:38.057 clat percentiles (usec): 00:41:38.057 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 306], 00:41:38.058 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 383], 60.00th=[ 429], 00:41:38.058 | 70.00th=[ 465], 80.00th=[ 502], 90.00th=[ 553], 95.00th=[ 594], 00:41:38.058 | 99.00th=[ 660], 99.50th=[ 914], 99.90th=[41157], 99.95th=[41157], 00:41:38.058 | 99.99th=[41157] 00:41:38.058 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(6144KiB/1034msec); 0 zone resets 00:41:38.058 slat (nsec): min=5785, max=58996, avg=12158.18, stdev=5704.72 00:41:38.058 clat (usec): min=174, max=447, avg=243.90, stdev=51.46 00:41:38.058 lat (usec): min=180, max=462, avg=256.06, stdev=52.52 00:41:38.058 clat percentiles (usec): 00:41:38.058 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:41:38.058 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:41:38.058 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330], 00:41:38.058 | 99.00th=[ 404], 99.50th=[ 437], 99.90th=[ 445], 99.95th=[ 449], 00:41:38.058 | 99.99th=[ 449] 00:41:38.058 bw ( KiB/s): min= 4096, max= 8192, per=34.57%, avg=6144.00, stdev=2896.31, samples=2 00:41:38.058 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:41:38.058 lat (usec) : 250=41.21%, 500=50.14%, 750=8.42%, 1000=0.04% 00:41:38.058 lat (msec) : 50=0.19% 00:41:38.058 cpu : usr=2.13%, sys=3.29%, ctx=2568, majf=0, minf=1 00:41:38.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.058 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:38.058 job2: (groupid=0, jobs=1): err= 0: pid=3208598: Sun Oct 13 20:11:27 2024 00:41:38.058 read: IOPS=618, 
BW=2472KiB/s (2532kB/s)(2544KiB/1029msec) 00:41:38.058 slat (nsec): min=5473, max=30197, avg=7415.00, stdev=3658.48 00:41:38.058 clat (usec): min=275, max=42145, avg=1167.14, stdev=5819.21 00:41:38.058 lat (usec): min=281, max=42152, avg=1174.55, stdev=5820.05 00:41:38.058 clat percentiles (usec): 00:41:38.058 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 297], 00:41:38.058 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:41:38.058 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 424], 95.00th=[ 453], 00:41:38.058 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:38.058 | 99.99th=[42206] 00:41:38.058 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:41:38.058 slat (nsec): min=6776, max=51732, avg=13118.13, stdev=8154.35 00:41:38.058 clat (usec): min=205, max=480, avg=256.72, stdev=45.02 00:41:38.058 lat (usec): min=212, max=518, avg=269.84, stdev=50.26 00:41:38.058 clat percentiles (usec): 00:41:38.058 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:41:38.058 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 243], 60.00th=[ 273], 00:41:38.058 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 326], 00:41:38.058 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 469], 99.95th=[ 482], 00:41:38.058 | 99.99th=[ 482] 00:41:38.058 bw ( KiB/s): min= 4096, max= 4096, per=23.04%, avg=4096.00, stdev= 0.00, samples=2 00:41:38.058 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:41:38.058 lat (usec) : 250=31.63%, 500=67.05%, 750=0.42%, 1000=0.12% 00:41:38.058 lat (msec) : 50=0.78% 00:41:38.058 cpu : usr=1.46%, sys=2.24%, ctx=1661, majf=0, minf=1 00:41:38.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.058 issued rwts: total=636,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:38.058 job3: (groupid=0, jobs=1): err= 0: pid=3208599: Sun Oct 13 20:11:27 2024 00:41:38.058 read: IOPS=926, BW=3704KiB/s (3793kB/s)(3708KiB/1001msec) 00:41:38.058 slat (nsec): min=4835, max=72654, avg=16597.71, stdev=11239.54 00:41:38.058 clat (usec): min=274, max=41223, avg=747.09, stdev=3858.78 00:41:38.058 lat (usec): min=279, max=41238, avg=763.68, stdev=3858.45 00:41:38.058 clat percentiles (usec): 00:41:38.058 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:41:38.058 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 383], 00:41:38.058 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 465], 00:41:38.058 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:38.058 | 99.99th=[41157] 00:41:38.058 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:41:38.058 slat (nsec): min=6399, max=42699, avg=14057.34, stdev=5864.35 00:41:38.058 clat (usec): min=193, max=444, avg=263.51, stdev=35.52 00:41:38.058 lat (usec): min=202, max=453, avg=277.56, stdev=35.29 00:41:38.058 clat percentiles (usec): 00:41:38.058 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:41:38.058 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 273], 00:41:38.058 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 322], 00:41:38.058 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 445], 99.95th=[ 445], 00:41:38.058 | 99.99th=[ 445] 00:41:38.058 bw ( KiB/s): min= 
8192, max= 8192, per=46.09%, avg=8192.00, stdev= 0.00, samples=1 00:41:38.058 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:38.058 lat (usec) : 250=23.89%, 500=74.94%, 750=0.72% 00:41:38.058 lat (msec) : 50=0.46% 00:41:38.058 cpu : usr=1.40%, sys=3.20%, ctx=1954, majf=0, minf=1 00:41:38.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.058 issued rwts: total=927,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:38.058 00:41:38.058 Run status group 0 (all jobs): 00:41:38.058 READ: bw=12.3MiB/s (12.9MB/s), 2472KiB/s-3981KiB/s (2532kB/s-4076kB/s), io=12.8MiB (13.4MB), run=1001-1037msec 00:41:38.058 WRITE: bw=17.4MiB/s (18.2MB/s), 3950KiB/s-5942KiB/s (4045kB/s-6085kB/s), io=18.0MiB (18.9MB), run=1001-1037msec 00:41:38.058 00:41:38.058 Disk stats (read/write): 00:41:38.058 nvme0n1: ios=731/1024, merge=0/0, ticks=598/229, in_queue=827, util=91.08% 00:41:38.058 nvme0n2: ios=1074/1536, merge=0/0, ticks=942/355, in_queue=1297, util=98.27% 00:41:38.058 nvme0n3: ios=512/601, merge=0/0, ticks=660/153, in_queue=813, util=88.95% 00:41:38.058 nvme0n4: ios=930/1024, merge=0/0, ticks=1141/255, in_queue=1396, util=97.69% 00:41:38.058 20:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:38.058 [global] 00:41:38.058 thread=1 00:41:38.058 invalidate=1 00:41:38.058 rw=write 00:41:38.058 time_based=1 00:41:38.058 runtime=1 00:41:38.058 ioengine=libaio 00:41:38.058 direct=1 00:41:38.058 bs=4096 00:41:38.058 iodepth=128 00:41:38.058 norandommap=0 00:41:38.058 numjobs=1 00:41:38.058 00:41:38.058 verify_dump=1 00:41:38.058 verify_backlog=512 00:41:38.058 verify_state_save=0 00:41:38.058 do_verify=1 00:41:38.058 verify=crc32c-intel 00:41:38.058 [job0] 00:41:38.058 filename=/dev/nvme0n1 00:41:38.058 [job1] 00:41:38.058 filename=/dev/nvme0n2 00:41:38.058 [job2] 00:41:38.058 filename=/dev/nvme0n3 00:41:38.058 [job3] 00:41:38.058 filename=/dev/nvme0n4 00:41:38.058 Could not set queue depth (nvme0n1) 00:41:38.058 Could not set queue depth (nvme0n2) 00:41:38.058 Could not set queue depth (nvme0n3) 00:41:38.058 Could not set queue depth (nvme0n4) 00:41:38.325 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:38.325 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:38.325 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:38.325 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:38.325 fio-3.35 00:41:38.325 Starting 4 threads 00:41:39.704 00:41:39.704 job0: (groupid=0, jobs=1): err= 0: pid=3208827: Sun Oct 13 20:11:29 2024 00:41:39.704 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:41:39.704 slat (usec): min=2, max=27761, avg=230.55, stdev=1325.50 00:41:39.704 clat (usec): min=8913, max=73118, avg=29994.79, stdev=13317.57 00:41:39.704 lat (usec): min=8931, max=73131, avg=30225.34, stdev=13362.76 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[11207], 5.00th=[13173], 10.00th=[13960], 20.00th=[19530], 
00:41:39.704 | 30.00th=[23462], 40.00th=[25297], 50.00th=[28181], 60.00th=[29492], 00:41:39.704 | 70.00th=[31327], 80.00th=[38536], 90.00th=[48497], 95.00th=[57934], 00:41:39.704 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:41:39.704 | 99.99th=[72877] 00:41:39.704 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(9.95MiB/1009msec); 0 zone resets 00:41:39.704 slat (usec): min=3, max=16764, avg=199.43, stdev=1068.71 00:41:39.704 clat (usec): min=6955, max=89705, avg=25981.05, stdev=14833.45 00:41:39.704 lat (usec): min=7844, max=89717, avg=26180.48, stdev=14903.23 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[ 7963], 5.00th=[12649], 10.00th=[13566], 20.00th=[13960], 00:41:39.704 | 30.00th=[16909], 40.00th=[21627], 50.00th=[24773], 60.00th=[26608], 00:41:39.704 | 70.00th=[28181], 80.00th=[31327], 90.00th=[37487], 95.00th=[58459], 00:41:39.704 | 99.00th=[85459], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:41:39.704 | 99.99th=[89654] 00:41:39.704 bw ( KiB/s): min= 7064, max=12288, per=18.73%, avg=9676.00, stdev=3693.93, samples=2 00:41:39.704 iops : min= 1766, max= 3072, avg=2419.00, stdev=923.48, samples=2 00:41:39.704 lat (msec) : 10=1.76%, 20=28.14%, 50=64.20%, 100=5.90% 00:41:39.704 cpu : usr=3.27%, sys=3.57%, ctx=237, majf=0, minf=1 00:41:39.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:41:39.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:39.704 issued rwts: total=2048,2547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:39.704 job1: (groupid=0, jobs=1): err= 0: pid=3208828: Sun Oct 13 20:11:29 2024 00:41:39.704 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:41:39.704 slat (usec): min=2, max=25126, avg=159.07, stdev=926.15 00:41:39.704 clat (usec): min=9320, max=61764, avg=19188.06, stdev=6993.15 00:41:39.704 lat (usec): min=9325, max=61775, avg=19347.13, stdev=7085.25 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11994], 20.00th=[13173], 00:41:39.704 | 30.00th=[13829], 40.00th=[16909], 50.00th=[19006], 60.00th=[20055], 00:41:39.704 | 70.00th=[21365], 80.00th=[22414], 90.00th=[26084], 95.00th=[31851], 00:41:39.704 | 99.00th=[43779], 99.50th=[44303], 99.90th=[55313], 99.95th=[55313], 00:41:39.704 | 99.99th=[61604] 00:41:39.704 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1003msec); 0 zone resets 00:41:39.704 slat (usec): min=3, max=19158, avg=136.23, stdev=817.23 00:41:39.704 clat (usec): min=2703, max=74565, avg=19390.81, stdev=9265.82 00:41:39.704 lat (usec): min=3294, max=74582, avg=19527.03, stdev=9324.92 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[ 6718], 5.00th=[10814], 10.00th=[12780], 20.00th=[13304], 00:41:39.704 | 30.00th=[14877], 40.00th=[16712], 50.00th=[17957], 60.00th=[19006], 00:41:39.704 | 70.00th=[20317], 80.00th=[22152], 90.00th=[26608], 95.00th=[33817], 00:41:39.704 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[73925], 00:41:39.704 | 99.99th=[74974] 00:41:39.704 bw ( KiB/s): min=13440, max=13512, per=26.08%, avg=13476.00, stdev=50.91, samples=2 00:41:39.704 iops : min= 3360, max= 3378, avg=3369.00, stdev=12.73, samples=2 00:41:39.704 lat (msec) : 4=0.14%, 10=1.00%, 20=63.26%, 50=33.66%, 100=1.93% 00:41:39.704 cpu : usr=3.89%, sys=7.58%, ctx=301, majf=0, minf=1 00:41:39.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:41:39.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:39.704 issued rwts: total=3072,3496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:39.704 job2: (groupid=0, jobs=1): err= 0: pid=3208835: Sun Oct 13 20:11:29 2024 00:41:39.704 read: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1014msec) 00:41:39.704 slat (usec): min=2, max=13947, avg=109.33, stdev=762.82 00:41:39.704 clat (usec): min=2063, max=65072, avg=16703.65, stdev=5153.23 00:41:39.704 lat (usec): min=2071, max=65078, avg=16812.98, stdev=5172.17 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[ 4555], 5.00th=[ 8356], 10.00th=[10159], 20.00th=[12780], 00:41:39.704 | 30.00th=[13829], 40.00th=[15401], 50.00th=[16909], 60.00th=[17957], 00:41:39.704 | 70.00th=[18744], 80.00th=[20579], 90.00th=[22938], 95.00th=[25297], 00:41:39.704 | 99.00th=[28967], 99.50th=[31065], 99.90th=[51119], 99.95th=[65274], 00:41:39.704 | 99.99th=[65274] 00:41:39.704 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:41:39.704 slat (usec): min=3, max=12669, avg=103.24, stdev=652.51 00:41:39.704 clat (usec): min=551, max=55266, avg=16512.44, stdev=8436.80 00:41:39.704 lat (usec): min=568, max=55270, avg=16615.67, stdev=8489.01 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[ 3490], 5.00th=[ 5669], 10.00th=[ 9372], 20.00th=[10552], 00:41:39.704 | 30.00th=[12387], 40.00th=[14222], 50.00th=[15401], 60.00th=[16057], 00:41:39.704 | 70.00th=[16450], 80.00th=[18744], 90.00th=[27919], 95.00th=[38536], 00:41:39.704 | 99.00th=[41157], 99.50th=[45876], 99.90th=[49021], 99.95th=[49021], 00:41:39.704 | 99.99th=[55313] 00:41:39.704 bw ( KiB/s): min=15464, max=16656, per=31.08%, avg=16060.00, stdev=842.87, samples=2 00:41:39.704 iops : min= 3864, max= 4164, avg=4014.00, stdev=212.13, samples=2 00:41:39.704 lat (usec) : 750=0.03% 00:41:39.704 lat (msec) : 2=0.06%, 4=1.38%, 10=9.46%, 20=69.76%, 50=19.23% 00:41:39.704 lat (msec) : 100=0.06% 00:41:39.704 cpu : usr=2.57%, sys=5.23%, ctx=347, majf=0, minf=1 00:41:39.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:39.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:39.704 issued rwts: total=3630,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:39.704 job3: (groupid=0, jobs=1): err= 0: pid=3208836: Sun Oct 13 20:11:29 2024 00:41:39.704 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec) 00:41:39.704 slat (usec): min=2, max=30275, avg=155.73, stdev=1300.71 00:41:39.704 clat (usec): min=869, max=97208, avg=22213.04, stdev=16338.78 00:41:39.704 lat (usec): min=878, max=97210, avg=22368.77, stdev=16412.49 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[ 5866], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[14484], 00:41:39.704 | 30.00th=[15533], 40.00th=[15926], 50.00th=[17433], 60.00th=[19530], 00:41:39.704 | 70.00th=[21365], 80.00th=[24249], 90.00th=[36963], 95.00th=[54789], 00:41:39.704 | 99.00th=[89654], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:41:39.704 | 99.99th=[96994] 00:41:39.704 write: IOPS=2947, BW=11.5MiB/s (12.1MB/s)(11.7MiB/1017msec); 0 zone resets 00:41:39.704 slat (usec): min=3, max=16498, avg=175.59, 
stdev=932.63 00:41:39.704 clat (usec): min=660, max=72426, avg=23215.03, stdev=14727.93 00:41:39.704 lat (usec): min=1590, max=72442, avg=23390.62, stdev=14827.36 00:41:39.704 clat percentiles (usec): 00:41:39.704 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[11076], 20.00th=[13435], 00:41:39.704 | 30.00th=[15139], 40.00th=[15664], 50.00th=[19006], 60.00th=[19792], 00:41:39.704 | 70.00th=[23462], 80.00th=[31851], 90.00th=[43254], 95.00th=[63177], 00:41:39.704 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:41:39.704 | 99.99th=[72877] 00:41:39.704 bw ( KiB/s): min= 8192, max=14768, per=22.22%, avg=11480.00, stdev=4649.93, samples=2 00:41:39.704 iops : min= 2048, max= 3692, avg=2870.00, stdev=1162.48, samples=2 00:41:39.704 lat (usec) : 750=0.02%, 1000=0.07% 00:41:39.704 lat (msec) : 2=0.02%, 10=6.89%, 20=56.41%, 50=29.90%, 100=6.69% 00:41:39.704 cpu : usr=2.26%, sys=3.74%, ctx=269, majf=0, minf=1 00:41:39.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:39.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:39.704 issued rwts: total=2560,2998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:39.704 00:41:39.704 Run status group 0 (all jobs): 00:41:39.705 READ: bw=43.4MiB/s (45.6MB/s), 8119KiB/s-14.0MiB/s (8314kB/s-14.7MB/s), io=44.2MiB (46.3MB), run=1003-1017msec 00:41:39.705 WRITE: bw=50.5MiB/s (52.9MB/s), 9.86MiB/s-15.8MiB/s (10.3MB/s-16.5MB/s), io=51.3MiB (53.8MB), run=1003-1017msec 00:41:39.705 00:41:39.705 Disk stats (read/write): 00:41:39.705 nvme0n1: ios=2089/2133, merge=0/0, ticks=16179/12009, in_queue=28188, util=97.70% 00:41:39.705 nvme0n2: ios=2524/2560, merge=0/0, ticks=17494/16931, in_queue=34425, util=84.97% 00:41:39.705 nvme0n3: ios=3118/3393, merge=0/0, ticks=35586/45237, in_queue=80823, util=99.90% 00:41:39.705 nvme0n4: ios=2048/2532, merge=0/0, ticks=24533/26138, in_queue=50671, util=89.58% 00:41:39.705 20:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:39.705 [global] 00:41:39.705 thread=1 00:41:39.705 invalidate=1 00:41:39.705 rw=randwrite 00:41:39.705 time_based=1 00:41:39.705 runtime=1 00:41:39.705 ioengine=libaio 00:41:39.705 direct=1 00:41:39.705 bs=4096 00:41:39.705 iodepth=128 00:41:39.705 norandommap=0 00:41:39.705 numjobs=1 00:41:39.705 00:41:39.705 verify_dump=1 00:41:39.705 verify_backlog=512 00:41:39.705 verify_state_save=0 00:41:39.705 do_verify=1 00:41:39.705 verify=crc32c-intel 00:41:39.705 [job0] 00:41:39.705 filename=/dev/nvme0n1 00:41:39.705 [job1] 00:41:39.705 filename=/dev/nvme0n2 00:41:39.705 [job2] 00:41:39.705 filename=/dev/nvme0n3 00:41:39.705 [job3] 00:41:39.705 filename=/dev/nvme0n4 00:41:39.705 Could not set queue depth (nvme0n1) 00:41:39.705 Could not set queue depth (nvme0n2) 00:41:39.705 Could not set queue depth (nvme0n3) 00:41:39.705 Could not set queue depth (nvme0n4) 00:41:39.705 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:39.705 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:39.705 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:39.705 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:39.705 fio-3.35 00:41:39.705 Starting 4 threads 00:41:41.080 00:41:41.080 job0: (groupid=0, jobs=1): err= 0: pid=3209092: Sun Oct 13 20:11:30 2024 00:41:41.080 read: IOPS=3857, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1003msec) 00:41:41.080 slat (usec): min=2, max=9857, avg=114.73, stdev=672.48 00:41:41.080 clat (usec): min=2409, max=29522, avg=14692.96, stdev=2989.15 00:41:41.080 lat (usec): min=5976, max=30289, avg=14807.68, stdev=3017.96 00:41:41.080 clat percentiles (usec): 00:41:41.080 | 1.00th=[ 5997], 5.00th=[10028], 10.00th=[12256], 20.00th=[12780], 00:41:41.080 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14615], 60.00th=[14877], 00:41:41.080 | 70.00th=[15664], 80.00th=[16581], 90.00th=[18482], 95.00th=[19792], 00:41:41.080 | 99.00th=[22414], 99.50th=[22414], 99.90th=[26084], 99.95th=[28443], 00:41:41.080 | 99.99th=[29492] 00:41:41.080 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:41:41.080 slat (usec): min=3, max=76499, avg=127.83, stdev=1402.14 00:41:41.080 clat (usec): min=1003, max=118764, avg=17186.96, stdev=14637.23 00:41:41.080 lat (usec): min=1015, max=121772, avg=17314.79, stdev=14726.48 00:41:41.080 clat percentiles (msec): 00:41:41.080 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:41:41.081 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:41:41.081 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 19], 95.00th=[ 22], 00:41:41.081 | 99.00th=[ 101], 99.50th=[ 115], 99.90th=[ 120], 99.95th=[ 120], 00:41:41.081 | 99.99th=[ 120] 00:41:41.081 bw ( KiB/s): min=15800, max=16968, per=26.91%, avg=16384.00, stdev=825.90, samples=2 00:41:41.081 iops : min= 3950, max= 4242, avg=4096.00, stdev=206.48, samples=2 00:41:41.081 lat (msec) : 2=0.03%, 4=0.01%, 10=4.65%, 20=89.23%, 50=4.49% 00:41:41.081 lat (msec) : 100=1.02%, 250=0.58% 00:41:41.081 cpu : usr=1.80%, sys=3.39%, ctx=330, majf=0, minf=1 00:41:41.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:41.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:41.081 issued rwts: total=3869,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:41.081 job1: (groupid=0, jobs=1): err= 0: pid=3209109: Sun Oct 13 20:11:30 2024 00:41:41.081 read: IOPS=3334, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1005msec) 00:41:41.081 slat (usec): min=2, max=16451, avg=130.08, stdev=838.60 00:41:41.081 clat (usec): min=2700, max=44761, avg=16613.19, stdev=5022.72 00:41:41.081 lat (usec): min=5073, max=44766, avg=16743.27, stdev=5074.92 00:41:41.081 clat percentiles (usec): 00:41:41.081 | 1.00th=[ 8160], 5.00th=[11863], 10.00th=[13435], 20.00th=[13960], 00:41:41.081 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15139], 60.00th=[15533], 00:41:41.081 | 70.00th=[16057], 80.00th=[18482], 90.00th=[22414], 95.00th=[27919], 00:41:41.081 | 99.00th=[36963], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:41:41.081 | 99.99th=[44827] 00:41:41.081 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:41:41.081 slat (usec): min=3, max=23090, avg=152.24, stdev=1114.71 00:41:41.081 clat (usec): min=4906, max=56401, avg=19914.56, stdev=9338.83 00:41:41.081 lat (usec): min=4912, max=56408, avg=20066.80, stdev=9430.31 00:41:41.081 clat percentiles (usec): 00:41:41.081 | 1.00th=[ 7504], 
5.00th=[10945], 10.00th=[13042], 20.00th=[14091], 00:41:41.081 | 30.00th=[14353], 40.00th=[14877], 50.00th=[16319], 60.00th=[17433], 00:41:41.081 | 70.00th=[20317], 80.00th=[25035], 90.00th=[32900], 95.00th=[43254], 00:41:41.081 | 99.00th=[47973], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:41:41.081 | 99.99th=[56361] 00:41:41.081 bw ( KiB/s): min=12344, max=16328, per=23.55%, avg=14336.00, stdev=2817.11, samples=2 00:41:41.081 iops : min= 3086, max= 4082, avg=3584.00, stdev=704.28, samples=2 00:41:41.081 lat (msec) : 4=0.01%, 10=2.38%, 20=75.56%, 50=21.73%, 100=0.32% 00:41:41.081 cpu : usr=2.59%, sys=4.28%, ctx=287, majf=0, minf=1 00:41:41.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:41:41.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:41.081 issued rwts: total=3351,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:41.081 job2: (groupid=0, jobs=1): err= 0: pid=3209134: Sun Oct 13 20:11:30 2024 00:41:41.081 read: IOPS=3687, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1006msec) 00:41:41.081 slat (usec): min=2, max=15504, avg=125.16, stdev=981.44 00:41:41.081 clat (usec): min=1335, max=51281, avg=16909.96, stdev=5002.08 00:41:41.081 lat (usec): min=2478, max=51286, avg=17035.12, stdev=5037.42 00:41:41.081 clat percentiles (usec): 00:41:41.081 | 1.00th=[ 4621], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[14353], 00:41:41.081 | 30.00th=[15664], 40.00th=[16319], 50.00th=[16581], 60.00th=[16909], 00:41:41.081 | 70.00th=[17171], 80.00th=[19792], 90.00th=[23200], 95.00th=[26084], 00:41:41.081 | 99.00th=[31589], 99.50th=[32900], 99.90th=[46924], 99.95th=[46924], 00:41:41.081 | 99.99th=[51119] 00:41:41.081 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:41:41.081 slat (usec): min=3, max=14431, avg=115.98, stdev=710.53 00:41:41.081 clat (usec): min=3319, max=33977, avg=15861.49, stdev=4263.82 00:41:41.081 lat (usec): min=3327, max=33986, avg=15977.48, stdev=4291.42 00:41:41.081 clat percentiles (usec): 00:41:41.081 | 1.00th=[ 4817], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[12780], 00:41:41.081 | 30.00th=[14615], 40.00th=[15270], 50.00th=[16057], 60.00th=[17695], 00:41:41.081 | 70.00th=[17957], 80.00th=[18482], 90.00th=[20579], 95.00th=[21627], 00:41:41.081 | 99.00th=[27919], 99.50th=[28181], 99.90th=[31589], 99.95th=[31851], 00:41:41.081 | 99.99th=[33817] 00:41:41.081 bw ( KiB/s): min=16128, max=16657, per=26.92%, avg=16392.50, stdev=374.06, samples=2 00:41:41.081 iops : min= 4032, max= 4164, avg=4098.00, stdev=93.34, samples=2 00:41:41.081 lat (msec) : 2=0.01%, 4=0.60%, 10=8.06%, 20=76.81%, 50=14.50% 00:41:41.081 lat (msec) : 100=0.01% 00:41:41.081 cpu : usr=2.39%, sys=4.78%, ctx=389, majf=0, minf=1 00:41:41.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:41.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:41.081 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:41.081 job3: (groupid=0, jobs=1): err= 0: pid=3209144: Sun Oct 13 20:11:30 2024 00:41:41.081 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:41:41.081 slat (usec): min=2, max=13336, avg=134.98, stdev=740.75 00:41:41.081 clat (usec): 
min=7920, max=67255, avg=17596.51, stdev=6097.51 00:41:41.081 lat (usec): min=7923, max=67259, avg=17731.49, stdev=6105.89 00:41:41.081 clat percentiles (usec): 00:41:41.081 | 1.00th=[ 8029], 5.00th=[11207], 10.00th=[12256], 20.00th=[14615], 00:41:41.081 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16319], 60.00th=[16712], 00:41:41.081 | 70.00th=[16909], 80.00th=[17957], 90.00th=[27395], 95.00th=[31851], 00:41:41.081 | 99.00th=[39584], 99.50th=[44303], 99.90th=[66323], 99.95th=[66323], 00:41:41.081 | 99.99th=[67634] 00:41:41.081 write: IOPS=3521, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:41:41.081 slat (usec): min=3, max=30025, avg=161.56, stdev=1044.10 00:41:41.081 clat (usec): min=3261, max=93000, avg=20590.41, stdev=15404.81 00:41:41.081 lat (usec): min=4355, max=93006, avg=20751.97, stdev=15494.00 00:41:41.081 clat percentiles (usec): 00:41:41.081 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[15270], 00:41:41.081 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15926], 60.00th=[16450], 00:41:41.081 | 70.00th=[16909], 80.00th=[20317], 90.00th=[34866], 95.00th=[45876], 00:41:41.081 | 99.00th=[92799], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:41:41.081 | 99.99th=[92799] 00:41:41.081 bw ( KiB/s): min=12288, max=14984, per=22.40%, avg=13636.00, stdev=1906.36, samples=2 00:41:41.081 iops : min= 3072, max= 3746, avg=3409.00, stdev=476.59, samples=2 00:41:41.081 lat (msec) : 4=0.02%, 10=5.07%, 20=75.86%, 50=16.43%, 100=2.62% 00:41:41.081 cpu : usr=1.10%, sys=3.39%, ctx=356, majf=0, minf=1 00:41:41.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:41:41.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:41.081 issued rwts: total=3072,3536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:41.081 00:41:41.081 Run status group 0 (all jobs): 00:41:41.081 READ: bw=54.4MiB/s (57.0MB/s), 12.0MiB/s-15.1MiB/s (12.5MB/s-15.8MB/s), io=54.7MiB (57.4MB), run=1003-1006msec 00:41:41.081 WRITE: bw=59.5MiB/s (62.3MB/s), 13.8MiB/s-16.0MiB/s (14.4MB/s-16.7MB/s), io=59.8MiB (62.7MB), run=1003-1006msec 00:41:41.081 00:41:41.081 Disk stats (read/write): 00:41:41.081 nvme0n1: ios=3122/3567, merge=0/0, ticks=23937/33049, in_queue=56986, util=98.30% 00:41:41.081 nvme0n2: ios=2604/3055, merge=0/0, ticks=31055/47758, in_queue=78813, util=98.78% 00:41:41.081 nvme0n3: ios=3091/3583, merge=0/0, ticks=47273/49467, in_queue=96740, util=98.12% 00:41:41.081 nvme0n4: ios=2617/2846, merge=0/0, ticks=19094/27563, in_queue=46657, util=96.84% 00:41:41.081 20:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:41.081 20:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3209315 00:41:41.081 20:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:41.081 20:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:41.081 [global] 00:41:41.081 thread=1 00:41:41.081 invalidate=1 00:41:41.081 rw=read 00:41:41.081 time_based=1 00:41:41.081 runtime=10 00:41:41.081 ioengine=libaio 00:41:41.081 direct=1 00:41:41.081 bs=4096 00:41:41.081 iodepth=1 00:41:41.081 norandommap=1 00:41:41.081 numjobs=1 00:41:41.081 
00:41:41.081 [job0] 00:41:41.081 filename=/dev/nvme0n1 00:41:41.081 [job1] 00:41:41.081 filename=/dev/nvme0n2 00:41:41.081 [job2] 00:41:41.081 filename=/dev/nvme0n3 00:41:41.081 [job3] 00:41:41.081 filename=/dev/nvme0n4 00:41:41.081 Could not set queue depth (nvme0n1) 00:41:41.081 Could not set queue depth (nvme0n2) 00:41:41.081 Could not set queue depth (nvme0n3) 00:41:41.081 Could not set queue depth (nvme0n4) 00:41:41.081 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:41.081 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:41.081 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:41.082 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:41.082 fio-3.35 00:41:41.082 Starting 4 threads 00:41:44.368 20:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:44.368 20:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:44.368 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=290816, buflen=4096 00:41:44.368 fio: pid=3209407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:44.627 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:44.627 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:44.627 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=835584, buflen=4096 00:41:44.627 fio: pid=3209406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:44.885 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=24535040, buflen=4096 00:41:44.885 fio: pid=3209404, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:44.885 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:44.885 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:45.144 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=385024, buflen=4096 00:41:45.144 fio: pid=3209405, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:41:45.408 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:45.408 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:45.408 00:41:45.408 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3209404: Sun Oct 13 20:11:34 2024 00:41:45.408 read: IOPS=1684, BW=6736KiB/s (6898kB/s)(23.4MiB/3557msec) 00:41:45.408 slat (usec): min=5, 
max=25591, avg=16.83, stdev=353.95 00:41:45.408 clat (usec): min=221, max=41553, avg=569.25, stdev=3187.92 00:41:45.408 lat (usec): min=226, max=49034, avg=586.08, stdev=3224.06 00:41:45.408 clat percentiles (usec): 00:41:45.408 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:41:45.408 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:41:45.408 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 461], 00:41:45.408 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:45.408 | 99.99th=[41681] 00:41:45.408 bw ( KiB/s): min= 96, max=12408, per=100.00%, avg=7037.33, stdev=5339.25, samples=6 00:41:45.408 iops : min= 24, max= 3102, avg=1759.33, stdev=1334.81, samples=6 00:41:45.408 lat (usec) : 250=0.27%, 500=97.35%, 750=1.74%, 1000=0.02% 00:41:45.408 lat (msec) : 50=0.62% 00:41:45.408 cpu : usr=1.29%, sys=2.70%, ctx=5994, majf=0, minf=1 00:41:45.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:45.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.408 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.408 issued rwts: total=5991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:45.408 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3209405: Sun Oct 13 20:11:34 2024 00:41:45.408 read: IOPS=24, BW=95.8KiB/s (98.1kB/s)(376KiB/3925msec) 00:41:45.408 slat (usec): min=11, max=14846, avg=371.09, stdev=1868.75 00:41:45.408 clat (usec): min=625, max=79587, avg=41366.35, stdev=6991.63 00:41:45.408 lat (usec): min=650, max=79600, avg=41665.15, stdev=7193.20 00:41:45.408 clat percentiles (usec): 00:41:45.408 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:45.408 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:45.408 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:45.408 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:41:45.408 | 99.99th=[79168] 00:41:45.408 bw ( KiB/s): min= 86, max= 104, per=1.48%, avg=96.86, stdev= 6.09, samples=7 00:41:45.408 iops : min= 21, max= 26, avg=24.14, stdev= 1.68, samples=7 00:41:45.408 lat (usec) : 750=1.05% 00:41:45.408 lat (msec) : 50=95.79%, 100=2.11% 00:41:45.408 cpu : usr=0.00%, sys=0.25%, ctx=98, majf=0, minf=2 00:41:45.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:45.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.408 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.408 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:45.408 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3209406: Sun Oct 13 20:11:34 2024 00:41:45.408 read: IOPS=62, BW=247KiB/s (253kB/s)(816KiB/3300msec) 00:41:45.408 slat (usec): min=7, max=8870, avg=60.42, stdev=618.38 00:41:45.408 clat (usec): min=310, max=41593, avg=15995.11, stdev=19726.17 00:41:45.408 lat (usec): min=322, max=49981, avg=16055.75, stdev=19790.42 00:41:45.408 clat percentiles (usec): 00:41:45.408 | 1.00th=[ 314], 5.00th=[ 383], 10.00th=[ 408], 20.00th=[ 510], 00:41:45.408 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 652], 00:41:45.408 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 
95.00th=[41157], 00:41:45.408 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:45.408 | 99.99th=[41681] 00:41:45.408 bw ( KiB/s): min= 200, max= 328, per=3.95%, avg=256.00, stdev=54.96, samples=6 00:41:45.408 iops : min= 50, max= 82, avg=64.00, stdev=13.74, samples=6 00:41:45.409 lat (usec) : 500=19.02%, 750=41.95% 00:41:45.409 lat (msec) : 2=0.49%, 50=38.05% 00:41:45.409 cpu : usr=0.06%, sys=0.15%, ctx=206, majf=0, minf=2 00:41:45.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:45.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.409 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.409 issued rwts: total=205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:45.409 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3209407: Sun Oct 13 20:11:34 2024 00:41:45.409 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(284KiB/2961msec) 00:41:45.409 slat (nsec): min=12061, max=35550, avg=16113.21, stdev=5159.04 00:41:45.409 clat (usec): min=464, max=45625, avg=41359.22, stdev=4956.14 00:41:45.409 lat (usec): min=496, max=45641, avg=41375.33, stdev=4954.30 00:41:45.409 clat percentiles (usec): 00:41:45.409 | 1.00th=[ 465], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:41:45.409 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:41:45.409 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:45.409 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:41:45.409 | 99.99th=[45876] 00:41:45.409 bw ( KiB/s): min= 96, max= 96, per=1.48%, avg=96.00, stdev= 0.00, samples=5 00:41:45.409 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:41:45.409 lat (usec) : 500=1.39% 00:41:45.409 lat (msec) : 50=97.22% 00:41:45.409 cpu : usr=0.07%, sys=0.00%, ctx=72, majf=0, minf=1 00:41:45.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:45.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.409 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.409 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:45.409 00:41:45.409 Run status group 0 (all jobs): 00:41:45.409 READ: bw=6481KiB/s (6636kB/s), 95.8KiB/s-6736KiB/s (98.1kB/s-6898kB/s), io=24.8MiB (26.0MB), run=2961-3925msec 00:41:45.409 00:41:45.409 Disk stats (read/write): 00:41:45.409 nvme0n1: ios=5453/0, merge=0/0, ticks=3198/0, in_queue=3198, util=94.97% 00:41:45.409 nvme0n2: ios=93/0, merge=0/0, ticks=3811/0, in_queue=3811, util=96.23% 00:41:45.409 nvme0n3: ios=198/0, merge=0/0, ticks=3097/0, in_queue=3097, util=96.79% 00:41:45.409 nvme0n4: ios=69/0, merge=0/0, ticks=2851/0, in_queue=2851, util=96.75% 00:41:45.715 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:45.715 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:46.000 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:46.000 20:11:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:46.258 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:46.258 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:46.516 20:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:46.516 20:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:47.082 20:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:47.082 20:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3209315 00:41:47.082 20:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:47.082 20:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:47.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:47.648 nvmf hotplug test: fio failed as expected 00:41:47.648 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:47.906 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:48.166 rmmod nvme_tcp 00:41:48.166 rmmod nvme_fabrics 00:41:48.166 rmmod nvme_keyring 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3207177 ']' 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3207177 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3207177 ']' 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3207177 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3207177 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3207177' 00:41:48.166 killing process with pid 3207177 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3207177 00:41:48.166 20:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3207177 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@789 -- # iptables-save 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:49.546 20:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:51.453 00:41:51.453 real 0m27.020s 00:41:51.453 user 1m13.783s 00:41:51.453 sys 0m9.994s 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.453 ************************************ 00:41:51.453 END TEST nvmf_fio_target 00:41:51.453 ************************************ 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:51.453 ************************************ 00:41:51.453 START TEST nvmf_bdevio 00:41:51.453 ************************************ 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:51.453 * Looking for test storage... 
00:41:51.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:51.453 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.454 --rc genhtml_branch_coverage=1 00:41:51.454 --rc genhtml_function_coverage=1 00:41:51.454 --rc genhtml_legend=1 00:41:51.454 --rc geninfo_all_blocks=1 00:41:51.454 --rc geninfo_unexecuted_blocks=1 00:41:51.454 00:41:51.454 ' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.454 --rc genhtml_branch_coverage=1 00:41:51.454 --rc genhtml_function_coverage=1 00:41:51.454 --rc genhtml_legend=1 00:41:51.454 --rc geninfo_all_blocks=1 00:41:51.454 --rc geninfo_unexecuted_blocks=1 00:41:51.454 00:41:51.454 ' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.454 --rc genhtml_branch_coverage=1 00:41:51.454 --rc genhtml_function_coverage=1 00:41:51.454 --rc genhtml_legend=1 00:41:51.454 --rc geninfo_all_blocks=1 00:41:51.454 --rc geninfo_unexecuted_blocks=1 00:41:51.454 00:41:51.454 ' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.454 --rc genhtml_branch_coverage=1 00:41:51.454 --rc genhtml_function_coverage=1 00:41:51.454 --rc genhtml_legend=1 00:41:51.454 --rc geninfo_all_blocks=1 00:41:51.454 --rc geninfo_unexecuted_blocks=1 00:41:51.454 00:41:51.454 ' 00:41:51.454 20:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:51.454 20:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:51.454 20:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:53.357 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:53.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:53.358 20:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:53.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:53.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:53.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:53.358 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:53.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:53.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:41:53.616 00:41:53.616 --- 10.0.0.2 ping statistics --- 00:41:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.616 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:53.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:53.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:41:53.616 00:41:53.616 --- 10.0.0.1 ping statistics --- 00:41:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.616 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:53.616 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:53.617 20:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3212310 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3212310 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3212310 ']' 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:53.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:53.617 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:53.617 [2024-10-13 20:11:43.401561] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:53.617 [2024-10-13 20:11:43.404129] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:41:53.617 [2024-10-13 20:11:43.404233] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:53.876 [2024-10-13 20:11:43.549508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:54.135 [2024-10-13 20:11:43.695761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:54.135 [2024-10-13 20:11:43.695841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:54.135 [2024-10-13 20:11:43.695871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:54.135 [2024-10-13 20:11:43.695893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:54.136 [2024-10-13 20:11:43.695916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:54.136 [2024-10-13 20:11:43.699042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:54.136 [2024-10-13 20:11:43.699121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:54.136 [2024-10-13 20:11:43.699167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:54.136 [2024-10-13 20:11:43.699179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:54.394 [2024-10-13 20:11:44.074832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:54.394 [2024-10-13 20:11:44.087708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:54.394 [2024-10-13 20:11:44.087900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:54.394 [2024-10-13 20:11:44.088739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:54.394 [2024-10-13 20:11:44.089098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:54.653 [2024-10-13 20:11:44.444297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.653 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:54.912 Malloc0 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.913 20:11:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:54.913 [2024-10-13 20:11:44.552549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:54.913 { 00:41:54.913 "params": { 00:41:54.913 "name": "Nvme$subsystem", 00:41:54.913 "trtype": "$TEST_TRANSPORT", 00:41:54.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:54.913 "adrfam": "ipv4", 00:41:54.913 "trsvcid": "$NVMF_PORT", 00:41:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:54.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:54.913 "hdgst": ${hdgst:-false}, 00:41:54.913 "ddgst": ${ddgst:-false} 00:41:54.913 }, 00:41:54.913 "method": "bdev_nvme_attach_controller" 00:41:54.913 } 00:41:54.913 EOF 00:41:54.913 )") 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:41:54.913 20:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:54.913 "params": { 00:41:54.913 "name": "Nvme1", 00:41:54.913 "trtype": "tcp", 00:41:54.913 "traddr": "10.0.0.2", 00:41:54.913 "adrfam": "ipv4", 00:41:54.913 "trsvcid": "4420", 00:41:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:54.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:54.913 "hdgst": false, 00:41:54.913 "ddgst": false 00:41:54.913 }, 00:41:54.913 "method": "bdev_nvme_attach_controller" 00:41:54.913 }' 00:41:54.913 [2024-10-13 20:11:44.638303] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
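Read in sequence, the rpc_cmd calls above configure the freshly started target in five steps: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, create subsystem cnode1, add the bdev as its namespace, and open a listener on 10.0.0.2:4420. Issued by hand they would look roughly like the following (rpc_cmd is the harness wrapper around scripts/rpc.py; paths assumed relative to the SPDK tree):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 131072 x 512 B = 64 MiB
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON fragment expanded just above (bdev_nvme_attach_controller with trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn cnode1) is what gen_nvmf_target_json hands to bdevio on /dev/fd/62, so the bdevio process attaches to this listener as Nvme1 at startup.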
00:41:54.913 [2024-10-13 20:11:44.638466] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212466 ] 00:41:55.172 [2024-10-13 20:11:44.764435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:55.172 [2024-10-13 20:11:44.896442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:55.172 [2024-10-13 20:11:44.896469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:55.172 [2024-10-13 20:11:44.896473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:55.739 I/O targets: 00:41:55.739 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:55.739 00:41:55.739 00:41:55.739 CUnit - A unit testing framework for C - Version 2.1-3 00:41:55.739 http://cunit.sourceforge.net/ 00:41:55.739 00:41:55.739 00:41:55.739 Suite: bdevio tests on: Nvme1n1 00:41:55.739 Test: blockdev write read block ...passed 00:41:55.998 Test: blockdev write zeroes read block ...passed 00:41:55.998 Test: blockdev write zeroes read no split ...passed 00:41:55.998 Test: blockdev write zeroes read split ...passed 00:41:55.998 Test: blockdev write zeroes read split partial ...passed 00:41:55.998 Test: blockdev reset ...[2024-10-13 20:11:45.676255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:55.998 [2024-10-13 20:11:45.676467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:41:55.998 [2024-10-13 20:11:45.810943] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:41:55.998 passed 00:41:56.257 Test: blockdev write read 8 blocks ...passed 00:41:56.257 Test: blockdev write read size > 128k ...passed 00:41:56.257 Test: blockdev write read invalid size ...passed 00:41:56.257 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:56.257 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:56.257 Test: blockdev write read max offset ...passed 00:41:56.257 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:56.257 Test: blockdev writev readv 8 blocks ...passed 00:41:56.257 Test: blockdev writev readv 30 x 1block ...passed 00:41:56.257 Test: blockdev writev readv block ...passed 00:41:56.257 Test: blockdev writev readv size > 128k ...passed 00:41:56.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:56.257 Test: blockdev comparev and writev ...[2024-10-13 20:11:46.025740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.257 [2024-10-13 20:11:46.025799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:56.257 [2024-10-13 20:11:46.025838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.257 [2024-10-13 20:11:46.025866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:56.257 [2024-10-13 20:11:46.026472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.257 [2024-10-13 20:11:46.026517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:56.257 [2024-10-13 20:11:46.026559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.257 [2024-10-13 20:11:46.026585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:56.258 [2024-10-13 20:11:46.027135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.258 [2024-10-13 20:11:46.027168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:56.258 [2024-10-13 20:11:46.027205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.258 [2024-10-13 20:11:46.027232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:56.258 [2024-10-13 20:11:46.027812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.258 [2024-10-13 20:11:46.027844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:56.258 [2024-10-13 20:11:46.027876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:56.258 [2024-10-13 20:11:46.027900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:56.258 passed 00:41:56.516 Test: blockdev nvme passthru rw ...passed 00:41:56.516 Test: blockdev nvme passthru vendor specific ...[2024-10-13 20:11:46.109764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:56.516 [2024-10-13 20:11:46.109810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:56.516 [2024-10-13 20:11:46.110067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:56.516 [2024-10-13 20:11:46.110098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:56.516 [2024-10-13 20:11:46.110302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:56.516 [2024-10-13 20:11:46.110335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:56.516 [2024-10-13 20:11:46.110551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:56.516 [2024-10-13 20:11:46.110583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:56.516 passed 00:41:56.516 Test: blockdev nvme admin passthru ...passed 00:41:56.516 Test: blockdev copy ...passed 00:41:56.516 00:41:56.516 Run Summary: Type Total Ran Passed Failed Inactive 00:41:56.516 suites 1 1 n/a 0 0 00:41:56.516 tests 23 23 23 0 0 00:41:56.516 asserts 152 152 152 0 n/a 00:41:56.516 00:41:56.516 Elapsed time = 1.423 seconds 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:57.451 20:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:57.451 rmmod nvme_tcp 00:41:57.451 rmmod nvme_fabrics 00:41:57.451 rmmod nvme_keyring 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
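The tail of the test is the shared nvmftestfini teardown: unload the kernel NVMe-over-fabrics modules, confirm via ps that pid 3212310 still names an SPDK reactor (and not sudo) before killing it, then wait for it to exit. A condensed sketch of the same cleanup, assuming the target was started as a background child of the current shell so that wait is valid:

  pid=3212310                                    # nvmfpid recorded at startup
  comm=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_3; refuse to kill sudo by mistake
  [ "$comm" != sudo ] && kill "$pid" && wait "$pid" 2>/dev/null
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # may report modules already unloaded
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's ACCEPT rules
  ip -4 addr flush cvl_0_1                               # matches the flush logged just below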
00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3212310 ']' 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3212310 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3212310 ']' 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3212310 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3212310 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3212310' 00:41:57.451 killing process with pid 3212310 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3212310 00:41:57.451 20:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3212310 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:58.828 20:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:00.739 20:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:00.739 00:42:00.739 real 0m9.419s 00:42:00.739 user 
0m17.563s 00:42:00.739 sys 0m3.115s 00:42:00.739 20:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.739 20:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:00.739 ************************************ 00:42:00.739 END TEST nvmf_bdevio 00:42:00.739 ************************************ 00:42:00.739 20:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:00.739 00:42:00.739 real 4m29.001s 00:42:00.739 user 9m50.607s 00:42:00.739 sys 1m28.645s 00:42:00.739 20:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.739 20:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:00.739 ************************************ 00:42:00.739 END TEST nvmf_target_core_interrupt_mode 00:42:00.739 ************************************ 00:42:00.739 20:11:50 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:00.739 20:11:50 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:00.739 20:11:50 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:00.739 20:11:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:00.998 ************************************ 00:42:00.998 START TEST nvmf_interrupt 00:42:00.998 ************************************ 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:00.998 * Looking for test storage... 
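From here the suite moves on to nvmf_interrupt, invoking test/nvmf/target/interrupt.sh with --transport=tcp --interrupt-mode. The core of that test, traced further down, is polling each reactor thread with top and comparing its CPU share against busy/idle thresholds. A rough sketch of that style of check, assuming reactor threads are named reactor_<idx> (hypothetical helper, not the harness's exact reactor_is_busy_or_idle):

  check_reactor_idle() {            # succeed only if reactor <idx> of <pid> is near-idle
      local pid=$1 idx=$2 idle_threshold=30
      local line rate
      line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
      rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
      rate=${rate%.*}                                                # keep the integer part
      (( ${rate:-0} <= idle_threshold ))
  }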
00:42:00.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.998 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:00.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.998 --rc genhtml_branch_coverage=1 00:42:00.998 --rc genhtml_function_coverage=1 00:42:00.999 --rc genhtml_legend=1 00:42:00.999 --rc geninfo_all_blocks=1 00:42:00.999 --rc geninfo_unexecuted_blocks=1 00:42:00.999 00:42:00.999 ' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.999 --rc genhtml_branch_coverage=1 00:42:00.999 --rc genhtml_function_coverage=1 00:42:00.999 --rc genhtml_legend=1 00:42:00.999 --rc geninfo_all_blocks=1 00:42:00.999 --rc geninfo_unexecuted_blocks=1 00:42:00.999 00:42:00.999 ' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.999 --rc genhtml_branch_coverage=1 00:42:00.999 --rc genhtml_function_coverage=1 00:42:00.999 --rc genhtml_legend=1 00:42:00.999 --rc geninfo_all_blocks=1 00:42:00.999 --rc geninfo_unexecuted_blocks=1 00:42:00.999 00:42:00.999 ' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:00.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.999 --rc genhtml_branch_coverage=1 00:42:00.999 --rc genhtml_function_coverage=1 00:42:00.999 --rc genhtml_legend=1 00:42:00.999 --rc geninfo_all_blocks=1 00:42:00.999 --rc geninfo_unexecuted_blocks=1 00:42:00.999 00:42:00.999 ' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:00.999 20:11:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:02.901 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:02.901 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:02.901 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:02.901 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:02.901 20:11:52 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:02.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.902 20:11:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:02.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:02.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:02.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:02.902 20:11:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:02.902 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:03.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:03.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:42:03.162 00:42:03.162 --- 10.0.0.2 ping statistics --- 00:42:03.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.162 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:03.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:03.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:42:03.162 00:42:03.162 --- 10.0.0.1 ping statistics --- 00:42:03.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.162 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3214811 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3214811 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3214811 ']' 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:03.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:03.162 20:11:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:03.162 [2024-10-13 20:11:52.874372] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:03.162 [2024-10-13 20:11:52.876981] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:42:03.162 [2024-10-13 20:11:52.877082] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:03.422 [2024-10-13 20:11:53.021827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:03.422 [2024-10-13 20:11:53.161091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:03.422 [2024-10-13 20:11:53.161190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:03.422 [2024-10-13 20:11:53.161220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:03.422 [2024-10-13 20:11:53.161241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:03.422 [2024-10-13 20:11:53.161266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:03.422 [2024-10-13 20:11:53.163969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:03.422 [2024-10-13 20:11:53.163977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:03.992 [2024-10-13 20:11:53.498903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:03.992 [2024-10-13 20:11:53.499571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:03.992 [2024-10-13 20:11:53.499865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:04.251 5000+0 records in 00:42:04.251 5000+0 records out 00:42:04.251 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0142802 s, 717 MB/s 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:04.251 AIO0 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:04.251 [2024-10-13 20:11:53.928993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.251 20:11:53 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:04.251 [2024-10-13 20:11:53.957322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3214811 0 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3214811 0 idle 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:04.251 20:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214811 root 20 0 20.1t 195456 100608 S 6.7 0.3 0:00.72 reactor_0' 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214811 root 20 0 20.1t 195456 100608 S 6.7 0.3 0:00.72 reactor_0 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=6 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3214811 1 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3214811 1 idle 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214821 root 20 0 20.1t 195456 100608 S 0.0 0.3 0:00.00 reactor_1' 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214821 root 20 0 20.1t 195456 100608 S 0.0 0.3 0:00.00 reactor_1 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:04.511 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3214988 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in 
{0..1} 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3214811 0 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3214811 0 busy 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:04.512 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:04.770 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214811 root 20 0 20.1t 196992 101376 S 6.7 0.3 0:00.73 reactor_0' 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214811 root 20 0 20.1t 196992 101376 S 6.7 0.3 0:00.73 reactor_0 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:04.771 20:11:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:05.707 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:05.707 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:05.707 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:05.707 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214811 root 20 0 20.1t 209664 101760 R 99.9 0.3 0:02.98 reactor_0' 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214811 root 20 0 20.1t 209664 101760 R 99.9 0.3 0:02.98 reactor_0 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 
-- # (( cpu_rate < busy_threshold )) 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3214811 1 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3214811 1 busy 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:05.968 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214821 root 20 0 20.1t 209664 101760 R 99.9 0.3 0:01.30 reactor_1' 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214821 root 20 0 20.1t 209664 101760 R 99.9 0.3 0:01.30 reactor_1 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:06.227 20:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3214988 00:42:16.207 Initializing NVMe Controllers 00:42:16.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:16.207 Controller IO queue size 256, less than required. 00:42:16.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:16.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:16.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:16.207 Initialization complete. Launching workers. 
00:42:16.207 ======================================================== 00:42:16.207 Latency(us) 00:42:16.207 Device Information : IOPS MiB/s Average min max 00:42:16.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 11253.95 43.96 22766.33 6707.48 27717.07 00:42:16.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10873.56 42.47 23561.87 6791.32 64673.37 00:42:16.207 ======================================================== 00:42:16.207 Total : 22127.51 86.44 23157.26 6707.48 64673.37 00:42:16.207 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3214811 0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3214811 0 idle 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214811 root 20 0 20.1t 209664 101760 S 0.0 0.3 0:20.68 reactor_0' 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214811 root 20 0 20.1t 209664 101760 S 0.0 0.3 0:20.68 reactor_0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3214811 1 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3214811 1 idle 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214821 root 20 0 20.1t 209664 101760 S 0.0 0.3 0:09.98 reactor_1' 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214821 root 20 0 20.1t 209664 101760 S 0.0 0.3 0:09.98 reactor_1 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:16.207 20:12:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:16.207 20:12:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:16.207 20:12:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:42:16.207 20:12:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:16.207 20:12:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:16.207 20:12:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@52 -- # for i in {0..1} 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3214811 0 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3214811 0 idle 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:17.587 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214811 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:20.88 reactor_0' 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214811 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:20.88 reactor_0 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3214811 1 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3214811 1 idle 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3214811 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
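The interrupt/common.sh trace above and below keeps applying one busy/idle poll: take a single batch sample of the target process with top, keep the reactor_N row, strip leading whitespace, read the %CPU column, and compare it against a threshold for up to ten one-second attempts. The following is a minimal stand-alone sketch of that pattern, reconstructed only from what the trace shows; the thresholds, field positions and retry count are copied from the log and may differ from the exact upstream helper.

    # reactor_is_busy_or_idle PID IDX STATE -- STATE is "busy" or "idle".
    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=30 idle_threshold=30   # values seen in the trace; the idle checks above run with 65/30
        local j top_reactor cpu_rate

        for ((j = 10; j != 0; j--)); do
            # One batch sample of the process' threads, keep only the reactor row.
            top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
            cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}     # 99.9 -> 99, 6.7 -> 6, exactly as the trace does
            cpu_rate=${cpu_rate:-0}

            if [[ $state == busy ]] && (( cpu_rate >= busy_threshold )); then
                return 0                # reactor is burning enough CPU to count as busy
            elif [[ $state == idle ]] && (( cpu_rate <= idle_threshold )); then
                return 0                # reactor has dropped back to (near) 0% CPU
            fi
            sleep 1
        done
        return 1                        # requested state never reached within ~10s
    }

With the pid from the log, reactor_is_busy_or_idle 3214811 0 busy returns 0 as soon as reactor_0's %CPU reaches the threshold, which is what the 6.7% -> 99.9% progression above demonstrates.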
00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3214811 -w 256 00:42:17.847 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3214821 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:10.07 reactor_1' 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3214821 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:10.07 reactor_1 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:18.106 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:18.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:18.365 20:12:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:18.365 rmmod nvme_tcp 00:42:18.365 rmmod nvme_fabrics 00:42:18.365 rmmod nvme_keyring 00:42:18.365 20:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:18.365 20:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:18.365 20:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:18.365 20:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
3214811 ']' 00:42:18.365 20:12:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3214811 00:42:18.365 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3214811 ']' 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3214811 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3214811 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3214811' 00:42:18.366 killing process with pid 3214811 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3214811 00:42:18.366 20:12:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3214811 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:19.299 20:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:21.839 20:12:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:21.839 00:42:21.839 real 0m20.552s 00:42:21.839 user 0m39.395s 00:42:21.839 sys 0m6.521s 00:42:21.839 20:12:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:21.839 20:12:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.839 ************************************ 00:42:21.839 END TEST nvmf_interrupt 00:42:21.839 ************************************ 00:42:21.839 00:42:21.839 real 35m19.684s 00:42:21.839 user 93m5.965s 00:42:21.839 sys 7m45.401s 00:42:21.839 20:12:11 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:21.839 20:12:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:21.839 ************************************ 00:42:21.839 END TEST nvmf_tcp 00:42:21.839 ************************************ 00:42:21.839 20:12:11 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:42:21.839 20:12:11 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:21.839 20:12:11 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:21.839 20:12:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:21.839 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:42:21.839 ************************************ 00:42:21.839 START TEST spdkcli_nvmf_tcp 00:42:21.839 ************************************ 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:21.839 * Looking for test storage... 00:42:21.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.839 --rc genhtml_branch_coverage=1 00:42:21.839 --rc genhtml_function_coverage=1 00:42:21.839 --rc genhtml_legend=1 00:42:21.839 --rc geninfo_all_blocks=1 00:42:21.839 --rc geninfo_unexecuted_blocks=1 00:42:21.839 00:42:21.839 ' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.839 --rc genhtml_branch_coverage=1 00:42:21.839 --rc genhtml_function_coverage=1 00:42:21.839 --rc genhtml_legend=1 00:42:21.839 --rc geninfo_all_blocks=1 00:42:21.839 --rc geninfo_unexecuted_blocks=1 00:42:21.839 00:42:21.839 ' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.839 --rc genhtml_branch_coverage=1 00:42:21.839 --rc genhtml_function_coverage=1 00:42:21.839 --rc genhtml_legend=1 00:42:21.839 --rc geninfo_all_blocks=1 00:42:21.839 --rc geninfo_unexecuted_blocks=1 00:42:21.839 00:42:21.839 ' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.839 --rc genhtml_branch_coverage=1 00:42:21.839 --rc genhtml_function_coverage=1 00:42:21.839 --rc genhtml_legend=1 00:42:21.839 --rc geninfo_all_blocks=1 00:42:21.839 --rc geninfo_unexecuted_blocks=1 00:42:21.839 00:42:21.839 ' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:21.839 
20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:21.839 20:12:11 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:21.840 20:12:11 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:21.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3217737 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3217737 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3217737 ']' 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:21.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:21.840 20:12:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:21.840 [2024-10-13 20:12:11.431310] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
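The spdkcli test above brings up its own target via run_nvmf_tgt (build/bin/nvmf_tgt -m 0x3 -p 0) and then blocks in waitforlisten until the app's RPC socket answers. Below is a rough sketch of that launch-and-wait pattern; the probe RPC, socket path, SPDK_DIR and retry budget are assumptions chosen for illustration rather than copied from the harness.

    # Launch the NVMe-oF target and poll its RPC socket until it responds,
    # mirroring run_nvmf_tgt/waitforlisten from the trace above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    RPC_SOCK=/var/tmp/spdk.sock            # assumed default RPC socket

    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &   # reactors on cores 0-1, main core 0
    tgt_pid=$!

    for _ in $(seq 1 100); do
        # rpc_get_methods is a cheap probe: it only succeeds once the app is listening.
        if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt ($tgt_pid) is ready on $RPC_SOCK"
            break
        fi
        sleep 0.5
    done

Polling the RPC socket instead of sleeping for a fixed interval is what lets the harness move straight on to the spdkcli_job.py commands that follow as soon as both reactors have started.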
00:42:21.840 [2024-10-13 20:12:11.431466] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217737 ] 00:42:21.840 [2024-10-13 20:12:11.559239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:22.099 [2024-10-13 20:12:11.689352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.099 [2024-10-13 20:12:11.689353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:22.665 20:12:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:22.665 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:22.665 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:22.665 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:22.665 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:22.665 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:22.665 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:22.665 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:22.665 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:22.665 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:22.665 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:22.665 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:22.665 ' 00:42:26.001 [2024-10-13 20:12:15.215739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.940 [2024-10-13 20:12:16.493791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:29.486 [2024-10-13 20:12:18.869493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:31.394 [2024-10-13 20:12:20.916456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:32.775 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:32.775 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:32.775 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:32.775 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:32.775 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:32.775 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:32.775 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:32.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:32.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:32.775 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:32.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:32.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:32.775 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:32.775 20:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:32.775 20:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:32.775 20:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:33.035 20:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:33.035 20:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:33.035 20:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:33.035 20:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:33.035 20:12:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:33.294 20:12:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:33.552 
20:12:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:33.552 20:12:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:33.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:33.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:33.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:33.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:33.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:33.552 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:33.552 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:33.552 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:33.552 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:33.552 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:33.552 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:33.552 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:33.552 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:33.552 ' 00:42:40.123 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:40.123 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:40.123 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:40.123 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:40.123 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:40.123 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:40.123 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:40.123 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:40.123 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:40.123 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:40.123 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:40.123 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:40.123 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:40.123 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:40.123 
20:12:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3217737 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3217737 ']' 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3217737 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3217737 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217737' 00:42:40.123 killing process with pid 3217737 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3217737 00:42:40.123 20:12:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3217737 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3217737 ']' 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3217737 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3217737 ']' 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3217737 00:42:40.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3217737) - No such process 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3217737 is not found' 00:42:40.691 Process with pid 3217737 is not found 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:40.691 00:42:40.691 real 0m19.044s 00:42:40.691 user 0m39.990s 00:42:40.691 sys 0m1.045s 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:40.691 20:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:40.691 ************************************ 00:42:40.691 END TEST spdkcli_nvmf_tcp 00:42:40.691 ************************************ 00:42:40.691 20:12:30 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:40.691 20:12:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:40.691 20:12:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:40.691 20:12:30 -- common/autotest_common.sh@10 -- # set +x 00:42:40.691 ************************************ 00:42:40.691 START TEST nvmf_identify_passthru 00:42:40.691 ************************************ 00:42:40.691 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:40.691 * Looking for test 
storage... 00:42:40.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:40.691 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:40.691 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:42:40.691 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:40.691 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:40.691 20:12:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.692 --rc genhtml_branch_coverage=1 00:42:40.692 --rc genhtml_function_coverage=1 00:42:40.692 --rc genhtml_legend=1 00:42:40.692 --rc geninfo_all_blocks=1 00:42:40.692 --rc geninfo_unexecuted_blocks=1 00:42:40.692 00:42:40.692 ' 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.692 --rc genhtml_branch_coverage=1 00:42:40.692 --rc genhtml_function_coverage=1 00:42:40.692 --rc genhtml_legend=1 00:42:40.692 --rc geninfo_all_blocks=1 00:42:40.692 --rc geninfo_unexecuted_blocks=1 00:42:40.692 00:42:40.692 ' 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.692 --rc genhtml_branch_coverage=1 00:42:40.692 --rc genhtml_function_coverage=1 00:42:40.692 --rc genhtml_legend=1 00:42:40.692 --rc geninfo_all_blocks=1 00:42:40.692 --rc geninfo_unexecuted_blocks=1 00:42:40.692 00:42:40.692 ' 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.692 --rc genhtml_branch_coverage=1 00:42:40.692 --rc genhtml_function_coverage=1 00:42:40.692 --rc genhtml_legend=1 00:42:40.692 --rc geninfo_all_blocks=1 00:42:40.692 --rc geninfo_unexecuted_blocks=1 00:42:40.692 00:42:40.692 ' 00:42:40.692 20:12:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:40.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:40.692 20:12:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:40.692 20:12:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:40.692 20:12:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.692 20:12:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:40.692 20:12:30 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:40.692 20:12:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:42.598 20:12:32 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:42.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:42.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:42.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:42.598 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:42.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:42.599 20:12:32 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:42.599 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:42.859 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:42.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:42.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:42:42.859 00:42:42.860 --- 10.0.0.2 ping statistics --- 00:42:42.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:42.860 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:42.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
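For reference, the netns-based test topology that nvmftestinit builds in the trace above can be reproduced by hand with stock iproute2/iptables commands. A minimal sketch follows, assuming the interface names from this particular run (cvl_0_0 and cvl_0_1 are the two ice ports on this host) and the default 10.0.0.0/24 addressing seen in the log:

  # Move the target-side port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged so the cleanup path can find the rule again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Connectivity checks in both directions, mirroring the pings in the log.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1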
00:42:42.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:42:42.860 00:42:42.860 --- 10.0.0.1 ping statistics --- 00:42:42.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:42.860 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:42.860 20:12:32 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:42:42.860 20:12:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:42.860 20:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:48.141 20:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:42:48.141 20:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:48.141 20:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:48.141 20:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3222636 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:52.339 20:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3222636 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3222636 ']' 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.339 20:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:52.339 [2024-10-13 20:12:41.564464] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:42:52.339 [2024-10-13 20:12:41.564623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:52.339 [2024-10-13 20:12:41.698531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:52.339 [2024-10-13 20:12:41.836519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:52.339 [2024-10-13 20:12:41.836583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
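The grep/awk pipelines above record the serial and model number of the local PCIe controller; later in this test the same fields are read back through the passthru subsystem over NVMe/TCP and compared. A hedged sketch of that check, using the spdk_nvme_identify invocations as they appear in this log (the BDF 0000:88:00.0, the 10.0.0.2:4420 listener and the path relative to an SPDK checkout are specific to this run):

  IDENTIFY=./build/bin/spdk_nvme_identify
  bdf=0000:88:00.0

  # Serial and model number as reported by the physical controller over PCIe.
  pcie_sn=$($IDENTIFY -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  pcie_mn=$($IDENTIFY -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:'  | awk '{print $3}')

  # Same fields as reported by the passthru subsystem over NVMe/TCP.
  tcp_sn=$($IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
           | grep 'Serial Number:' | awk '{print $3}')

  # The test only passes when the passthru view matches the PCIe view.
  [ "$pcie_sn" = "$tcp_sn" ] || echo "serial number mismatch: $pcie_sn vs $tcp_sn"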
00:42:52.339 [2024-10-13 20:12:41.836606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:52.339 [2024-10-13 20:12:41.836627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:52.339 [2024-10-13 20:12:41.836644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:52.339 [2024-10-13 20:12:41.839492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.339 [2024-10-13 20:12:41.839533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:52.339 [2024-10-13 20:12:41.839567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.339 [2024-10-13 20:12:41.839571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:42:52.907 20:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:52.907 INFO: Log level set to 20 00:42:52.907 INFO: Requests: 00:42:52.907 { 00:42:52.907 "jsonrpc": "2.0", 00:42:52.907 "method": "nvmf_set_config", 00:42:52.907 "id": 1, 00:42:52.907 "params": { 00:42:52.907 "admin_cmd_passthru": { 00:42:52.907 "identify_ctrlr": true 00:42:52.907 } 00:42:52.907 } 00:42:52.907 } 00:42:52.907 00:42:52.907 INFO: response: 00:42:52.907 { 00:42:52.907 "jsonrpc": "2.0", 00:42:52.907 "id": 1, 00:42:52.907 "result": true 00:42:52.907 } 00:42:52.907 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.907 20:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.907 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:52.907 INFO: Setting log level to 20 00:42:52.907 INFO: Setting log level to 20 00:42:52.907 INFO: Log level set to 20 00:42:52.907 INFO: Log level set to 20 00:42:52.907 INFO: Requests: 00:42:52.907 { 00:42:52.907 "jsonrpc": "2.0", 00:42:52.907 "method": "framework_start_init", 00:42:52.907 "id": 1 00:42:52.907 } 00:42:52.907 00:42:52.907 INFO: Requests: 00:42:52.907 { 00:42:52.907 "jsonrpc": "2.0", 00:42:52.907 "method": "framework_start_init", 00:42:52.907 "id": 1 00:42:52.907 } 00:42:52.907 00:42:53.166 [2024-10-13 20:12:42.919909] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:53.166 INFO: response: 00:42:53.166 { 00:42:53.166 "jsonrpc": "2.0", 00:42:53.166 "id": 1, 00:42:53.166 "result": true 00:42:53.166 } 00:42:53.166 00:42:53.166 INFO: response: 00:42:53.166 { 00:42:53.166 "jsonrpc": "2.0", 00:42:53.166 "id": 1, 00:42:53.166 "result": true 00:42:53.166 } 00:42:53.166 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.166 20:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.166 20:12:42 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:42:53.166 INFO: Setting log level to 40 00:42:53.166 INFO: Setting log level to 40 00:42:53.166 INFO: Setting log level to 40 00:42:53.166 [2024-10-13 20:12:42.932830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.166 20:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:53.166 20:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.166 20:12:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.450 Nvme0n1 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.450 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.450 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.450 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.450 [2024-10-13 20:12:45.896372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.450 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.450 [ 00:42:56.450 { 00:42:56.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:56.450 "subtype": "Discovery", 00:42:56.450 "listen_addresses": [], 00:42:56.450 "allow_any_host": true, 00:42:56.450 "hosts": [] 00:42:56.450 }, 00:42:56.450 { 00:42:56.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:56.450 "subtype": "NVMe", 00:42:56.450 "listen_addresses": [ 00:42:56.450 { 00:42:56.450 "trtype": "TCP", 00:42:56.450 "adrfam": "IPv4", 00:42:56.450 "traddr": "10.0.0.2", 00:42:56.450 "trsvcid": "4420" 00:42:56.450 } 00:42:56.450 ], 00:42:56.450 "allow_any_host": true, 00:42:56.450 "hosts": [], 00:42:56.450 "serial_number": 
"SPDK00000000000001", 00:42:56.450 "model_number": "SPDK bdev Controller", 00:42:56.450 "max_namespaces": 1, 00:42:56.450 "min_cntlid": 1, 00:42:56.450 "max_cntlid": 65519, 00:42:56.450 "namespaces": [ 00:42:56.450 { 00:42:56.450 "nsid": 1, 00:42:56.450 "bdev_name": "Nvme0n1", 00:42:56.450 "name": "Nvme0n1", 00:42:56.450 "nguid": "9D1CFE7DFBB04291B2B9EE93A1AE74A4", 00:42:56.450 "uuid": "9d1cfe7d-fbb0-4291-b2b9-ee93a1ae74a4" 00:42:56.450 } 00:42:56.450 ] 00:42:56.450 } 00:42:56.450 ] 00:42:56.450 20:12:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.450 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:56.450 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:56.451 20:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:56.709 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:42:56.709 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:56.709 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:56.709 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:56.968 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:56.968 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.968 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:56.968 20:12:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:56.968 rmmod nvme_tcp 00:42:56.968 rmmod nvme_fabrics 00:42:56.968 rmmod nvme_keyring 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 3222636 ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3222636 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3222636 ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3222636 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3222636 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3222636' 00:42:56.968 killing process with pid 3222636 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3222636 00:42:56.968 20:12:46 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3222636 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:59.520 20:12:49 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:59.520 20:12:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:59.520 20:12:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:01.430 20:12:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:01.430 00:43:01.430 real 0m20.858s 00:43:01.430 user 0m34.073s 00:43:01.430 sys 0m3.598s 00:43:01.430 20:12:51 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:01.430 20:12:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:01.430 ************************************ 00:43:01.430 END TEST nvmf_identify_passthru 00:43:01.430 ************************************ 00:43:01.430 20:12:51 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:01.430 20:12:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:01.430 20:12:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:01.430 20:12:51 -- common/autotest_common.sh@10 -- # set +x 00:43:01.430 ************************************ 00:43:01.430 START TEST nvmf_dif 00:43:01.430 ************************************ 00:43:01.430 20:12:51 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:01.430 * Looking for test 
storage... 00:43:01.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:01.430 20:12:51 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:01.430 20:12:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:43:01.430 20:12:51 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:01.689 20:12:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:01.689 20:12:51 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:01.689 20:12:51 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.689 --rc genhtml_branch_coverage=1 00:43:01.689 --rc genhtml_function_coverage=1 00:43:01.689 --rc genhtml_legend=1 00:43:01.689 --rc geninfo_all_blocks=1 00:43:01.689 --rc geninfo_unexecuted_blocks=1 00:43:01.689 00:43:01.689 ' 00:43:01.689 20:12:51 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.689 --rc genhtml_branch_coverage=1 00:43:01.689 --rc genhtml_function_coverage=1 00:43:01.689 --rc genhtml_legend=1 00:43:01.689 --rc geninfo_all_blocks=1 00:43:01.689 --rc geninfo_unexecuted_blocks=1 00:43:01.689 00:43:01.689 ' 00:43:01.689 20:12:51 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.689 --rc genhtml_branch_coverage=1 00:43:01.689 --rc genhtml_function_coverage=1 00:43:01.689 --rc genhtml_legend=1 00:43:01.689 --rc geninfo_all_blocks=1 00:43:01.689 --rc geninfo_unexecuted_blocks=1 00:43:01.689 00:43:01.689 ' 00:43:01.689 20:12:51 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.689 --rc genhtml_branch_coverage=1 00:43:01.689 --rc genhtml_function_coverage=1 00:43:01.689 --rc genhtml_legend=1 00:43:01.689 --rc geninfo_all_blocks=1 00:43:01.689 --rc geninfo_unexecuted_blocks=1 00:43:01.689 00:43:01.689 ' 00:43:01.689 20:12:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:01.689 20:12:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:01.689 20:12:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.689 20:12:51 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.689 20:12:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.689 20:12:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:01.689 20:12:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:01.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:01.689 20:12:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:01.689 20:12:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:01.689 20:12:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:01.690 20:12:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:01.690 20:12:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:01.690 20:12:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:01.690 20:12:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:01.690 20:12:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:01.690 20:12:51 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:43:01.690 20:12:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:03.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:03.596 
20:12:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:03.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:03.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:03.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:03.596 20:12:53 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:03.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:03.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:43:03.855 00:43:03.855 --- 10.0.0.2 ping statistics --- 00:43:03.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:03.855 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:03.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
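Both test suites open TCP port 4420 through the ipts helper and remove the rule again with iptr during nvmftestfini; the pair reduces to the plain iptables calls visible in the trace. A rough equivalent, assuming the same SPDK_NVMF comment tag:

  # ipts: insert the rule and tag it with the original arguments as a comment.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # iptr: drop every tagged rule by filtering the saved ruleset and restoring it.
  iptables-save | grep -v SPDK_NVMF | iptables-restore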
00:43:03.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:43:03.855 00:43:03.855 --- 10.0.0.1 ping statistics --- 00:43:03.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:03.855 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:43:03.855 20:12:53 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:04.794 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:04.794 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:04.794 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:04.794 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:04.794 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:04.794 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:04.794 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:04.794 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:04.794 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:04.794 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:04.794 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:04.794 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:04.794 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:04.794 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:04.794 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:04.794 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:04.794 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:05.054 20:12:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:05.054 20:12:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3226166 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:05.054 20:12:54 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3226166 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3226166 ']' 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:05.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:05.054 20:12:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:05.054 [2024-10-13 20:12:54.782640] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:43:05.054 [2024-10-13 20:12:54.782813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.313 [2024-10-13 20:12:54.920654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.313 [2024-10-13 20:12:55.052438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:05.313 [2024-10-13 20:12:55.052525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:05.313 [2024-10-13 20:12:55.052552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:05.313 [2024-10-13 20:12:55.052576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:05.313 [2024-10-13 20:12:55.052596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:05.313 [2024-10-13 20:12:55.054207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:43:06.247 20:12:55 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 20:12:55 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:06.247 20:12:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:06.247 20:12:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 [2024-10-13 20:12:55.755987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.247 20:12:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 ************************************ 00:43:06.247 START TEST fio_dif_1_default 00:43:06.247 ************************************ 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 bdev_null0 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:06.247 [2024-10-13 20:12:55.812302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:06.247 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:06.248 { 00:43:06.248 "params": { 00:43:06.248 "name": "Nvme$subsystem", 00:43:06.248 "trtype": "$TEST_TRANSPORT", 00:43:06.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:06.248 "adrfam": "ipv4", 00:43:06.248 "trsvcid": "$NVMF_PORT", 00:43:06.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:06.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:06.248 "hdgst": ${hdgst:-false}, 00:43:06.248 "ddgst": ${ddgst:-false} 00:43:06.248 }, 00:43:06.248 "method": "bdev_nvme_attach_controller" 00:43:06.248 } 00:43:06.248 EOF 00:43:06.248 )") 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
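For reference, the create_subsystem trace above reduces to a short rpc.py sequence. A minimal sketch (rpc_cmd is the harness wrapper around scripts/rpc.py; the default /var/tmp/spdk.sock socket and automatic nsid assignment are assumed):
# 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# subsystem, namespace and TCP listener on the address assigned inside the netns
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420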
00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
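The fio_bdev/fio_plugin call traced here drives fio through the SPDK bdev ioengine. A stand-alone equivalent looks roughly like the sketch below; paths are illustrative, the libasan preload only applies to sanitizer builds, and in the harness both inputs arrive as /dev/fd/62 and /dev/fd/61 instead of files on disk:
# spdk_bdev is the fio plugin built under build/fio/ in the SPDK tree
LD_PRELOAD="/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev_nvme.json ./dif.fio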
00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:06.248 "params": { 00:43:06.248 "name": "Nvme0", 00:43:06.248 "trtype": "tcp", 00:43:06.248 "traddr": "10.0.0.2", 00:43:06.248 "adrfam": "ipv4", 00:43:06.248 "trsvcid": "4420", 00:43:06.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:06.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:06.248 "hdgst": false, 00:43:06.248 "ddgst": false 00:43:06.248 }, 00:43:06.248 "method": "bdev_nvme_attach_controller" 00:43:06.248 }' 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:06.248 20:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:06.506 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:06.506 fio-3.35 00:43:06.506 Starting 1 thread 00:43:18.751 00:43:18.751 filename0: (groupid=0, jobs=1): err= 0: pid=3226517: Sun Oct 13 20:13:07 2024 00:43:18.751 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:43:18.751 slat (nsec): min=5354, max=56109, avg=15540.21, stdev=5842.04 00:43:18.751 clat (usec): min=40788, max=43879, avg=40968.14, stdev=197.54 00:43:18.751 lat (usec): min=40802, max=43901, avg=40983.68, stdev=197.70 00:43:18.751 clat percentiles (usec): 00:43:18.751 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:18.751 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:18.751 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:18.751 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:43:18.751 | 99.99th=[43779] 00:43:18.751 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:43:18.751 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:18.751 lat (msec) : 50=100.00% 00:43:18.751 cpu : usr=92.48%, sys=6.98%, ctx=13, majf=0, minf=1635 00:43:18.751 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.751 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.751 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:18.751 00:43:18.751 Run status group 0 (all jobs): 00:43:18.751 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:43:18.751 ----------------------------------------------------- 00:43:18.751 Suppressions used: 00:43:18.751 count bytes template 00:43:18.751 1 8 /usr/src/fio/parse.c 00:43:18.751 1 8 libtcmalloc_minimal.so 00:43:18.751 1 904 libcrypto.so 00:43:18.751 ----------------------------------------------------- 00:43:18.751 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 
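A quick consistency check on the fio summary above: 976 issued reads x 4 KiB = 3904 KiB, 3904 KiB / 10.008 s ≈ 390 KiB/s, and 97.2 IOPS x 4 KiB ≈ 389 KiB/s, so the reported bandwidth, IOPS and issued I/O count agree with each other.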
00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.751 00:43:18.751 real 0m12.308s 00:43:18.751 user 0m11.385s 00:43:18.751 sys 0m1.137s 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:18.751 20:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:18.751 ************************************ 00:43:18.751 END TEST fio_dif_1_default 00:43:18.751 ************************************ 00:43:18.751 20:13:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:18.751 20:13:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:18.752 20:13:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 ************************************ 00:43:18.752 START TEST fio_dif_1_multi_subsystems 00:43:18.752 ************************************ 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 bdev_null0 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:18.752 
20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 [2024-10-13 20:13:08.168591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 bdev_null1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:18.752 { 00:43:18.752 "params": { 00:43:18.752 "name": "Nvme$subsystem", 00:43:18.752 "trtype": "$TEST_TRANSPORT", 00:43:18.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:18.752 "adrfam": "ipv4", 00:43:18.752 "trsvcid": "$NVMF_PORT", 00:43:18.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:18.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:18.752 "hdgst": ${hdgst:-false}, 00:43:18.752 "ddgst": ${ddgst:-false} 00:43:18.752 }, 00:43:18.752 "method": "bdev_nvme_attach_controller" 00:43:18.752 } 00:43:18.752 EOF 00:43:18.752 )") 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # grep libasan 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:18.752 { 00:43:18.752 "params": { 00:43:18.752 "name": "Nvme$subsystem", 00:43:18.752 "trtype": "$TEST_TRANSPORT", 00:43:18.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:18.752 "adrfam": "ipv4", 00:43:18.752 "trsvcid": "$NVMF_PORT", 00:43:18.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:18.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:18.752 "hdgst": ${hdgst:-false}, 00:43:18.752 "ddgst": ${ddgst:-false} 00:43:18.752 }, 00:43:18.752 "method": "bdev_nvme_attach_controller" 00:43:18.752 } 00:43:18.752 EOF 00:43:18.752 )") 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:18.752 "params": { 00:43:18.752 "name": "Nvme0", 00:43:18.752 "trtype": "tcp", 00:43:18.752 "traddr": "10.0.0.2", 00:43:18.752 "adrfam": "ipv4", 00:43:18.752 "trsvcid": "4420", 00:43:18.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:18.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:18.752 "hdgst": false, 00:43:18.752 "ddgst": false 00:43:18.752 }, 00:43:18.752 "method": "bdev_nvme_attach_controller" 00:43:18.752 },{ 00:43:18.752 "params": { 00:43:18.752 "name": "Nvme1", 00:43:18.752 "trtype": "tcp", 00:43:18.752 "traddr": "10.0.0.2", 00:43:18.752 "adrfam": "ipv4", 00:43:18.752 "trsvcid": "4420", 00:43:18.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:18.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:18.752 "hdgst": false, 00:43:18.752 "ddgst": false 00:43:18.752 }, 00:43:18.752 "method": "bdev_nvme_attach_controller" 00:43:18.752 }' 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:18.752 20:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:18.752 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:18.752 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:18.752 fio-3.35 00:43:18.752 Starting 2 threads 00:43:30.989 00:43:30.989 filename0: 
(groupid=0, jobs=1): err= 0: pid=3228044: Sun Oct 13 20:13:19 2024 00:43:30.989 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10006msec) 00:43:30.989 slat (nsec): min=5271, max=74270, avg=14858.41, stdev=4942.47 00:43:30.989 clat (usec): min=672, max=44773, avg=21019.25, stdev=20208.49 00:43:30.989 lat (usec): min=684, max=44789, avg=21034.11, stdev=20208.36 00:43:30.989 clat percentiles (usec): 00:43:30.989 | 1.00th=[ 693], 5.00th=[ 701], 10.00th=[ 717], 20.00th=[ 742], 00:43:30.989 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:43:30.989 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:30.989 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:43:30.989 | 99.99th=[44827] 00:43:30.989 bw ( KiB/s): min= 640, max= 800, per=49.75%, avg=758.40, stdev=34.59, samples=20 00:43:30.989 iops : min= 160, max= 200, avg=189.60, stdev= 8.65, samples=20 00:43:30.989 lat (usec) : 750=21.53%, 1000=27.63% 00:43:30.989 lat (msec) : 2=0.74%, 50=50.11% 00:43:30.989 cpu : usr=94.18%, sys=5.30%, ctx=15, majf=0, minf=1635 00:43:30.989 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:30.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:30.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:30.989 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:30.989 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:30.989 filename1: (groupid=0, jobs=1): err= 0: pid=3228045: Sun Oct 13 20:13:19 2024 00:43:30.989 read: IOPS=191, BW=764KiB/s (783kB/s)(7648KiB/10008msec) 00:43:30.989 slat (nsec): min=5320, max=41945, avg=14604.39, stdev=4274.05 00:43:30.989 clat (usec): min=666, max=44888, avg=20892.39, stdev=20197.20 00:43:30.989 lat (usec): min=678, max=44930, avg=20907.00, stdev=20197.07 00:43:30.989 clat percentiles (usec): 00:43:30.989 | 1.00th=[ 693], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 750], 00:43:30.989 | 30.00th=[ 783], 40.00th=[ 824], 50.00th=[ 1303], 60.00th=[41157], 00:43:30.989 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:30.989 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:43:30.989 | 99.99th=[44827] 00:43:30.989 bw ( KiB/s): min= 672, max= 864, per=50.08%, avg=763.20, stdev=44.38, samples=20 00:43:30.989 iops : min= 168, max= 216, avg=190.80, stdev=11.10, samples=20 00:43:30.989 lat (usec) : 750=18.83%, 1000=30.13% 00:43:30.989 lat (msec) : 2=1.26%, 50=49.79% 00:43:30.989 cpu : usr=94.16%, sys=5.32%, ctx=14, majf=0, minf=1634 00:43:30.989 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:30.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:30.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:30.989 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:30.989 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:30.989 00:43:30.989 Run status group 0 (all jobs): 00:43:30.989 READ: bw=1524KiB/s (1560kB/s), 760KiB/s-764KiB/s (778kB/s-783kB/s), io=14.9MiB (15.6MB), run=10006-10008msec 00:43:30.989 ----------------------------------------------------- 00:43:30.989 Suppressions used: 00:43:30.989 count bytes template 00:43:30.989 2 16 /usr/src/fio/parse.c 00:43:30.989 1 8 libtcmalloc_minimal.so 00:43:30.989 1 904 libcrypto.so 00:43:30.989 ----------------------------------------------------- 00:43:30.989 00:43:30.989 20:13:20 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.989 00:43:30.989 real 0m12.545s 00:43:30.989 user 0m21.271s 00:43:30.989 sys 0m1.599s 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 ************************************ 00:43:30.989 END TEST fio_dif_1_multi_subsystems 00:43:30.989 ************************************ 00:43:30.989 20:13:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:30.989 20:13:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:30.989 20:13:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 ************************************ 00:43:30.989 START TEST fio_dif_rand_params 00:43:30.989 ************************************ 00:43:30.989 
20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 bdev_null0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:30.989 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:30.990 [2024-10-13 20:13:20.759357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@558 -- # config=() 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:30.990 { 00:43:30.990 "params": { 00:43:30.990 "name": "Nvme$subsystem", 00:43:30.990 "trtype": "$TEST_TRANSPORT", 00:43:30.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:30.990 "adrfam": "ipv4", 00:43:30.990 "trsvcid": "$NVMF_PORT", 00:43:30.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:30.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:30.990 "hdgst": ${hdgst:-false}, 00:43:30.990 "ddgst": ${ddgst:-false} 00:43:30.990 }, 00:43:30.990 "method": "bdev_nvme_attach_controller" 00:43:30.990 } 00:43:30.990 EOF 00:43:30.990 )") 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
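gen_fio_conf assembles the job file on the fly from the parameters set above (bs=128k, numjobs=3, iodepth=3, runtime=5). A hand-written equivalent is sketched below; the filename and time_based setting are assumptions, following the plugin's ControllerName+n+nsid naming for the Nvme0 controller attached via the JSON config:
cat > dif_rand.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
FIO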
00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:30.990 "params": { 00:43:30.990 "name": "Nvme0", 00:43:30.990 "trtype": "tcp", 00:43:30.990 "traddr": "10.0.0.2", 00:43:30.990 "adrfam": "ipv4", 00:43:30.990 "trsvcid": "4420", 00:43:30.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:30.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:30.990 "hdgst": false, 00:43:30.990 "ddgst": false 00:43:30.990 }, 00:43:30.990 "method": "bdev_nvme_attach_controller" 00:43:30.990 }' 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:30.990 20:13:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.556 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:31.556 ... 00:43:31.556 fio-3.35 00:43:31.556 Starting 3 threads 00:43:38.115 00:43:38.115 filename0: (groupid=0, jobs=1): err= 0: pid=3229456: Sun Oct 13 20:13:27 2024 00:43:38.115 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(127MiB/5047msec) 00:43:38.115 slat (nsec): min=7702, max=58724, avg=18855.91, stdev=3988.38 00:43:38.115 clat (usec): min=6417, max=55582, avg=14807.36, stdev=4040.32 00:43:38.115 lat (usec): min=6428, max=55640, avg=14826.22, stdev=4040.37 00:43:38.115 clat percentiles (usec): 00:43:38.115 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:43:38.115 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[14877], 00:43:38.115 | 70.00th=[15533], 80.00th=[16188], 90.00th=[17171], 95.00th=[18220], 00:43:38.115 | 99.00th=[20317], 99.50th=[54264], 99.90th=[55313], 99.95th=[55837], 00:43:38.115 | 99.99th=[55837] 00:43:38.115 bw ( KiB/s): min=23552, max=27904, per=37.12%, avg=25984.00, stdev=1569.99, samples=10 00:43:38.115 iops : min= 184, max= 218, avg=203.00, stdev=12.27, samples=10 00:43:38.115 lat (msec) : 10=2.26%, 20=96.66%, 50=0.39%, 100=0.69% 00:43:38.115 cpu : usr=92.13%, sys=7.29%, ctx=13, majf=0, minf=1634 00:43:38.115 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:38.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:38.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:38.115 issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:38.115 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:38.115 filename0: (groupid=0, jobs=1): err= 0: pid=3229457: Sun Oct 13 20:13:27 2024 00:43:38.115 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(110MiB/5045msec) 00:43:38.115 slat (nsec): min=7097, max=51174, avg=19380.87, stdev=4417.61 00:43:38.115 clat (usec): min=8061, max=56997, avg=17202.02, stdev=5603.84 00:43:38.115 lat (usec): min=8071, max=57016, avg=17221.40, stdev=5603.43 00:43:38.115 clat percentiles (usec): 00:43:38.115 | 1.00th=[11994], 5.00th=[13042], 10.00th=[13698], 20.00th=[14615], 00:43:38.115 | 
30.00th=[15533], 40.00th=[16188], 50.00th=[16581], 60.00th=[17171], 00:43:38.115 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19268], 95.00th=[20055], 00:43:38.115 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:43:38.115 | 99.99th=[56886] 00:43:38.115 bw ( KiB/s): min=16929, max=25600, per=31.97%, avg=22377.70, stdev=2332.11, samples=10 00:43:38.115 iops : min= 132, max= 200, avg=174.80, stdev=18.29, samples=10 00:43:38.115 lat (msec) : 10=0.34%, 20=94.29%, 50=3.54%, 100=1.83% 00:43:38.115 cpu : usr=92.19%, sys=7.24%, ctx=10, majf=0, minf=1634 00:43:38.115 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:38.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:38.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:38.115 issued rwts: total=876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:38.115 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:38.115 filename0: (groupid=0, jobs=1): err= 0: pid=3229458: Sun Oct 13 20:13:27 2024 00:43:38.115 read: IOPS=173, BW=21.6MiB/s (22.7MB/s)(108MiB/5005msec) 00:43:38.115 slat (nsec): min=7149, max=43786, avg=19218.64, stdev=3954.83 00:43:38.115 clat (usec): min=4813, max=52487, avg=17313.55, stdev=4141.27 00:43:38.115 lat (usec): min=4831, max=52511, avg=17332.77, stdev=4141.23 00:43:38.115 clat percentiles (usec): 00:43:38.115 | 1.00th=[10683], 5.00th=[13042], 10.00th=[13698], 20.00th=[14615], 00:43:38.115 | 30.00th=[15664], 40.00th=[16450], 50.00th=[17171], 60.00th=[17957], 00:43:38.115 | 70.00th=[18482], 80.00th=[19268], 90.00th=[20055], 95.00th=[21103], 00:43:38.115 | 99.00th=[46924], 99.50th=[47449], 99.90th=[52691], 99.95th=[52691], 00:43:38.115 | 99.99th=[52691] 00:43:38.115 bw ( KiB/s): min=19200, max=24320, per=31.56%, avg=22092.80, stdev=1685.41, samples=10 00:43:38.115 iops : min= 150, max= 190, avg=172.60, stdev=13.17, samples=10 00:43:38.115 lat (msec) : 10=0.58%, 20=88.34%, 50=10.74%, 100=0.35% 00:43:38.115 cpu : usr=92.29%, sys=7.13%, ctx=8, majf=0, minf=1632 00:43:38.116 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:38.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:38.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:38.116 issued rwts: total=866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:38.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:38.116 00:43:38.116 Run status group 0 (all jobs): 00:43:38.116 READ: bw=68.4MiB/s (71.7MB/s), 21.6MiB/s-25.2MiB/s (22.7MB/s-26.4MB/s), io=345MiB (362MB), run=5005-5047msec 00:43:38.374 ----------------------------------------------------- 00:43:38.374 Suppressions used: 00:43:38.374 count bytes template 00:43:38.374 5 44 /usr/src/fio/parse.c 00:43:38.374 1 8 libtcmalloc_minimal.so 00:43:38.374 1 904 libcrypto.so 00:43:38.374 ----------------------------------------------------- 00:43:38.374 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.374 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 bdev_null0 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 [2024-10-13 20:13:28.047918] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 bdev_null1 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 bdev_null2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:38.375 { 00:43:38.375 "params": { 00:43:38.375 "name": "Nvme$subsystem", 00:43:38.375 "trtype": "$TEST_TRANSPORT", 00:43:38.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:38.375 "adrfam": "ipv4", 00:43:38.375 "trsvcid": "$NVMF_PORT", 00:43:38.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:38.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:38.375 "hdgst": ${hdgst:-false}, 00:43:38.375 "ddgst": ${ddgst:-false} 00:43:38.375 }, 00:43:38.375 "method": "bdev_nvme_attach_controller" 00:43:38.375 } 00:43:38.375 EOF 00:43:38.375 )") 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:38.375 { 00:43:38.375 "params": { 00:43:38.375 "name": "Nvme$subsystem", 00:43:38.375 "trtype": "$TEST_TRANSPORT", 00:43:38.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:38.375 "adrfam": "ipv4", 00:43:38.375 "trsvcid": "$NVMF_PORT", 00:43:38.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:38.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:38.375 "hdgst": ${hdgst:-false}, 00:43:38.375 "ddgst": ${ddgst:-false} 00:43:38.375 }, 00:43:38.375 "method": "bdev_nvme_attach_controller" 00:43:38.375 } 00:43:38.375 EOF 00:43:38.375 )") 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:38.375 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:38.376 { 00:43:38.376 "params": { 00:43:38.376 "name": "Nvme$subsystem", 00:43:38.376 "trtype": "$TEST_TRANSPORT", 00:43:38.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:38.376 "adrfam": "ipv4", 00:43:38.376 "trsvcid": "$NVMF_PORT", 00:43:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:38.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:38.376 "hdgst": ${hdgst:-false}, 00:43:38.376 "ddgst": ${ddgst:-false} 00:43:38.376 }, 00:43:38.376 "method": "bdev_nvme_attach_controller" 00:43:38.376 } 00:43:38.376 EOF 00:43:38.376 )") 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:38.376 "params": { 00:43:38.376 "name": "Nvme0", 00:43:38.376 "trtype": "tcp", 00:43:38.376 "traddr": "10.0.0.2", 00:43:38.376 "adrfam": "ipv4", 00:43:38.376 "trsvcid": "4420", 00:43:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:38.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:38.376 "hdgst": false, 00:43:38.376 "ddgst": false 00:43:38.376 }, 00:43:38.376 "method": "bdev_nvme_attach_controller" 00:43:38.376 },{ 00:43:38.376 "params": { 00:43:38.376 "name": "Nvme1", 00:43:38.376 "trtype": "tcp", 00:43:38.376 "traddr": "10.0.0.2", 00:43:38.376 "adrfam": "ipv4", 00:43:38.376 "trsvcid": "4420", 00:43:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:38.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:38.376 "hdgst": false, 00:43:38.376 "ddgst": false 00:43:38.376 }, 00:43:38.376 "method": "bdev_nvme_attach_controller" 00:43:38.376 },{ 00:43:38.376 "params": { 00:43:38.376 "name": "Nvme2", 00:43:38.376 "trtype": "tcp", 00:43:38.376 "traddr": "10.0.0.2", 00:43:38.376 "adrfam": "ipv4", 00:43:38.376 "trsvcid": "4420", 00:43:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:38.376 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:38.376 "hdgst": false, 00:43:38.376 "ddgst": false 00:43:38.376 }, 00:43:38.376 "method": "bdev_nvme_attach_controller" 00:43:38.376 }' 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:38.376 20:13:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:38.633 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:38.633 ... 00:43:38.633 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:38.633 ... 00:43:38.633 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:38.633 ... 
00:43:38.633 fio-3.35 00:43:38.633 Starting 24 threads 00:43:50.838 00:43:50.838 filename0: (groupid=0, jobs=1): err= 0: pid=3230422: Sun Oct 13 20:13:39 2024 00:43:50.838 read: IOPS=336, BW=1346KiB/s (1378kB/s)(13.2MiB/10036msec) 00:43:50.838 slat (nsec): min=9948, max=83174, avg=26851.61, stdev=7781.34 00:43:50.838 clat (msec): min=7, max=217, avg=47.30, stdev=18.65 00:43:50.838 lat (msec): min=7, max=217, avg=47.33, stdev=18.65 00:43:50.838 clat percentiles (msec): 00:43:50.838 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.838 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.838 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.838 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 215], 99.95th=[ 218], 00:43:50.838 | 99.99th=[ 218] 00:43:50.838 bw ( KiB/s): min= 384, max= 1536, per=4.18%, avg=1333.95, stdev=287.11, samples=19 00:43:50.838 iops : min= 96, max= 384, avg=333.47, stdev=71.80, samples=19 00:43:50.838 lat (msec) : 10=0.47%, 50=95.73%, 100=0.65%, 250=3.14% 00:43:50.838 cpu : usr=98.06%, sys=1.46%, ctx=14, majf=0, minf=1632 00:43:50.838 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.838 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.838 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.838 filename0: (groupid=0, jobs=1): err= 0: pid=3230423: Sun Oct 13 20:13:39 2024 00:43:50.838 read: IOPS=331, BW=1324KiB/s (1356kB/s)(12.9MiB/10006msec) 00:43:50.838 slat (nsec): min=10847, max=89380, avg=35819.68, stdev=9169.18 00:43:50.838 clat (msec): min=35, max=289, avg=48.01, stdev=23.28 00:43:50.838 lat (msec): min=35, max=290, avg=48.05, stdev=23.28 00:43:50.838 clat percentiles (msec): 00:43:50.838 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.838 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.838 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.838 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 292], 99.95th=[ 292], 00:43:50.838 | 99.99th=[ 292] 00:43:50.838 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1313.68, stdev=356.57, samples=19 00:43:50.838 iops : min= 64, max= 384, avg=328.42, stdev=89.14, samples=19 00:43:50.838 lat (msec) : 50=96.56%, 100=0.54%, 250=2.42%, 500=0.48% 00:43:50.838 cpu : usr=97.60%, sys=1.68%, ctx=67, majf=0, minf=1633 00:43:50.838 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.838 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.838 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.838 filename0: (groupid=0, jobs=1): err= 0: pid=3230424: Sun Oct 13 20:13:39 2024 00:43:50.838 read: IOPS=331, BW=1324KiB/s (1356kB/s)(12.9MiB/10005msec) 00:43:50.838 slat (nsec): min=4841, max=86042, avg=31253.45, stdev=12975.78 00:43:50.838 clat (msec): min=38, max=185, avg=48.06, stdev=19.68 00:43:50.838 lat (msec): min=38, max=185, avg=48.09, stdev=19.68 00:43:50.838 clat percentiles (msec): 00:43:50.838 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.838 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.838 | 70.00th=[ 45], 
80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.838 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 186], 99.95th=[ 186], 00:43:50.838 | 99.99th=[ 186] 00:43:50.838 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1313.68, stdev=338.23, samples=19 00:43:50.838 iops : min= 96, max= 384, avg=328.42, stdev=84.56, samples=19 00:43:50.838 lat (msec) : 50=96.07%, 100=0.54%, 250=3.38% 00:43:50.838 cpu : usr=97.32%, sys=1.79%, ctx=50, majf=0, minf=1632 00:43:50.838 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.838 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.838 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename0: (groupid=0, jobs=1): err= 0: pid=3230425: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=332, BW=1329KiB/s (1361kB/s)(13.0MiB/10017msec) 00:43:50.839 slat (usec): min=8, max=120, avg=65.07, stdev=11.15 00:43:50.839 clat (msec): min=28, max=195, avg=47.56, stdev=19.80 00:43:50.839 lat (msec): min=28, max=195, avg=47.63, stdev=19.79 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 197], 00:43:50.839 | 99.99th=[ 197] 00:43:50.839 bw ( KiB/s): min= 368, max= 1536, per=4.14%, avg=1320.42, stdev=336.14, samples=19 00:43:50.839 iops : min= 92, max= 384, avg=330.11, stdev=84.04, samples=19 00:43:50.839 lat (msec) : 50=95.97%, 100=0.78%, 250=3.25% 00:43:50.839 cpu : usr=96.44%, sys=2.28%, ctx=137, majf=0, minf=1631 00:43:50.839 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename0: (groupid=0, jobs=1): err= 0: pid=3230426: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10012msec) 00:43:50.839 slat (nsec): min=9648, max=92045, avg=36216.19, stdev=10217.27 00:43:50.839 clat (msec): min=29, max=196, avg=47.82, stdev=19.72 00:43:50.839 lat (msec): min=29, max=196, avg=47.85, stdev=19.72 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 161], 99.50th=[ 171], 99.90th=[ 197], 99.95th=[ 197], 00:43:50.839 | 99.99th=[ 197] 00:43:50.839 bw ( KiB/s): min= 368, max= 1536, per=4.14%, avg=1320.42, stdev=336.14, samples=19 00:43:50.839 iops : min= 92, max= 384, avg=330.11, stdev=84.04, samples=19 00:43:50.839 lat (msec) : 50=96.21%, 100=0.54%, 250=3.25% 00:43:50.839 cpu : usr=97.06%, sys=1.91%, ctx=163, majf=0, minf=1631 00:43:50.839 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename0: (groupid=0, jobs=1): err= 0: pid=3230427: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=331, BW=1325KiB/s (1357kB/s)(13.0MiB/10016msec) 00:43:50.839 slat (nsec): min=12557, max=78513, avg=34432.35, stdev=10226.82 00:43:50.839 clat (msec): min=17, max=353, avg=48.02, stdev=26.71 00:43:50.839 lat (msec): min=17, max=353, avg=48.05, stdev=26.71 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:43:50.839 | 99.00th=[ 159], 99.50th=[ 218], 99.90th=[ 355], 99.95th=[ 355], 00:43:50.839 | 99.99th=[ 355] 00:43:50.839 bw ( KiB/s): min= 240, max= 1536, per=4.11%, avg=1309.47, stdev=355.83, samples=19 00:43:50.839 iops : min= 60, max= 384, avg=327.37, stdev=88.96, samples=19 00:43:50.839 lat (msec) : 20=0.48%, 50=96.11%, 100=0.63%, 250=2.29%, 500=0.48% 00:43:50.839 cpu : usr=97.60%, sys=1.62%, ctx=33, majf=0, minf=1633 00:43:50.839 IO depths : 1=2.7%, 2=8.9%, 4=24.8%, 8=53.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename0: (groupid=0, jobs=1): err= 0: pid=3230428: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=332, BW=1329KiB/s (1360kB/s)(13.0MiB/10014msec) 00:43:50.839 slat (usec): min=12, max=104, avg=62.95, stdev= 9.86 00:43:50.839 clat (msec): min=17, max=300, avg=47.60, stdev=22.93 00:43:50.839 lat (msec): min=17, max=300, avg=47.66, stdev=22.93 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 279], 99.95th=[ 300], 00:43:50.839 | 99.99th=[ 300] 00:43:50.839 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1312.84, stdev=356.36, samples=19 00:43:50.839 iops : min= 64, max= 384, avg=328.21, stdev=89.09, samples=19 00:43:50.839 lat (msec) : 20=0.48%, 50=95.97%, 100=0.66%, 250=2.41%, 500=0.48% 00:43:50.839 cpu : usr=95.75%, sys=2.48%, ctx=199, majf=0, minf=1633 00:43:50.839 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename0: (groupid=0, jobs=1): err= 0: pid=3230429: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=332, BW=1329KiB/s (1361kB/s)(13.0MiB/10018msec) 00:43:50.839 slat (usec): min=14, max=106, avg=42.87, stdev=13.82 00:43:50.839 clat (msec): min=17, max=304, avg=47.77, stdev=23.17 00:43:50.839 lat (msec): min=17, max=304, avg=47.81, stdev=23.17 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 
44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 155], 99.50th=[ 182], 99.90th=[ 284], 99.95th=[ 305], 00:43:50.839 | 99.99th=[ 305] 00:43:50.839 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1313.68, stdev=356.57, samples=19 00:43:50.839 iops : min= 64, max= 384, avg=328.42, stdev=89.14, samples=19 00:43:50.839 lat (msec) : 20=0.48%, 50=96.09%, 100=0.54%, 250=2.40%, 500=0.48% 00:43:50.839 cpu : usr=97.24%, sys=1.96%, ctx=51, majf=0, minf=1633 00:43:50.839 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename1: (groupid=0, jobs=1): err= 0: pid=3230430: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=331, BW=1324KiB/s (1356kB/s)(12.9MiB/10004msec) 00:43:50.839 slat (usec): min=6, max=105, avg=42.77, stdev=19.73 00:43:50.839 clat (msec): min=42, max=185, avg=47.95, stdev=19.77 00:43:50.839 lat (msec): min=42, max=185, avg=47.99, stdev=19.77 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 155], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 186], 00:43:50.839 | 99.99th=[ 186] 00:43:50.839 bw ( KiB/s): min= 384, max= 1536, per=4.12%, avg=1313.68, stdev=338.23, samples=19 00:43:50.839 iops : min= 96, max= 384, avg=328.42, stdev=84.56, samples=19 00:43:50.839 lat (msec) : 50=96.14%, 100=0.48%, 250=3.38% 00:43:50.839 cpu : usr=97.35%, sys=1.72%, ctx=43, majf=0, minf=1632 00:43:50.839 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename1: (groupid=0, jobs=1): err= 0: pid=3230431: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=331, BW=1328KiB/s (1360kB/s)(13.0MiB/10025msec) 00:43:50.839 slat (usec): min=6, max=145, avg=48.14, stdev=19.74 00:43:50.839 clat (msec): min=27, max=161, avg=47.83, stdev=18.48 00:43:50.839 lat (msec): min=27, max=161, avg=47.88, stdev=18.48 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 161], 00:43:50.839 | 99.99th=[ 161] 00:43:50.839 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=319.57, samples=19 00:43:50.839 iops : min= 96, max= 384, avg=330.11, stdev=79.89, samples=19 00:43:50.839 lat (msec) : 50=95.49%, 100=1.14%, 250=3.37% 00:43:50.839 cpu : usr=97.16%, sys=1.87%, ctx=86, majf=0, minf=1634 00:43:50.839 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename1: (groupid=0, jobs=1): err= 0: pid=3230432: Sun Oct 13 20:13:39 2024 00:43:50.839 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10009msec) 00:43:50.839 slat (usec): min=6, max=119, avg=36.42, stdev= 9.10 00:43:50.839 clat (msec): min=29, max=209, avg=47.80, stdev=18.26 00:43:50.839 lat (msec): min=29, max=209, avg=47.84, stdev=18.26 00:43:50.839 clat percentiles (msec): 00:43:50.839 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.839 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.839 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.839 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 211], 00:43:50.839 | 99.99th=[ 211] 00:43:50.839 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=319.44, samples=19 00:43:50.839 iops : min= 96, max= 384, avg=330.11, stdev=79.86, samples=19 00:43:50.839 lat (msec) : 50=95.61%, 100=1.08%, 250=3.31% 00:43:50.839 cpu : usr=97.77%, sys=1.59%, ctx=47, majf=0, minf=1632 00:43:50.839 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.839 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.839 filename1: (groupid=0, jobs=1): err= 0: pid=3230433: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=332, BW=1329KiB/s (1361kB/s)(13.0MiB/10019msec) 00:43:50.840 slat (nsec): min=6253, max=96780, avg=39448.27, stdev=12618.27 00:43:50.840 clat (msec): min=28, max=195, avg=47.78, stdev=19.54 00:43:50.840 lat (msec): min=28, max=195, avg=47.81, stdev=19.54 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 197], 00:43:50.840 | 99.99th=[ 197] 00:43:50.840 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=336.10, samples=19 00:43:50.840 iops : min= 96, max= 384, avg=330.11, stdev=84.03, samples=19 00:43:50.840 lat (msec) : 50=96.09%, 100=0.60%, 250=3.31% 00:43:50.840 cpu : usr=98.01%, sys=1.48%, ctx=15, majf=0, minf=1633 00:43:50.840 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename1: (groupid=0, jobs=1): err= 0: pid=3230434: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=332, BW=1328KiB/s (1360kB/s)(13.0MiB/10024msec) 00:43:50.840 slat (nsec): min=11232, max=93767, avg=25784.63, stdev=10702.75 00:43:50.840 clat (msec): min=29, max=196, avg=47.98, stdev=18.61 00:43:50.840 lat (msec): min=29, max=196, avg=48.00, stdev=18.61 00:43:50.840 clat 
percentiles (msec): 00:43:50.840 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 197], 00:43:50.840 | 99.99th=[ 197] 00:43:50.840 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=321.26, samples=19 00:43:50.840 iops : min= 96, max= 384, avg=330.11, stdev=80.31, samples=19 00:43:50.840 lat (msec) : 50=95.67%, 100=0.96%, 250=3.37% 00:43:50.840 cpu : usr=98.06%, sys=1.41%, ctx=27, majf=0, minf=1634 00:43:50.840 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename1: (groupid=0, jobs=1): err= 0: pid=3230435: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=331, BW=1324KiB/s (1356kB/s)(12.9MiB/10006msec) 00:43:50.840 slat (usec): min=9, max=105, avg=38.05, stdev=13.06 00:43:50.840 clat (msec): min=35, max=288, avg=47.99, stdev=23.20 00:43:50.840 lat (msec): min=35, max=289, avg=48.03, stdev=23.20 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 288], 99.95th=[ 288], 00:43:50.840 | 99.99th=[ 288] 00:43:50.840 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1313.68, stdev=356.57, samples=19 00:43:50.840 iops : min= 64, max= 384, avg=328.42, stdev=89.14, samples=19 00:43:50.840 lat (msec) : 50=96.62%, 100=0.48%, 250=2.42%, 500=0.48% 00:43:50.840 cpu : usr=97.67%, sys=1.71%, ctx=41, majf=0, minf=1631 00:43:50.840 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename1: (groupid=0, jobs=1): err= 0: pid=3230436: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=332, BW=1329KiB/s (1361kB/s)(13.0MiB/10016msec) 00:43:50.840 slat (usec): min=12, max=106, avg=46.89, stdev=15.97 00:43:50.840 clat (msec): min=18, max=353, avg=47.78, stdev=24.34 00:43:50.840 lat (msec): min=18, max=353, avg=47.83, stdev=24.34 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 300], 99.95th=[ 355], 00:43:50.840 | 99.99th=[ 355] 00:43:50.840 bw ( KiB/s): min= 240, max= 1536, per=4.12%, avg=1313.68, stdev=356.73, samples=19 00:43:50.840 iops : min= 60, max= 384, avg=328.42, stdev=89.18, samples=19 00:43:50.840 lat (msec) : 20=0.48%, 50=96.03%, 100=0.66%, 250=2.34%, 500=0.48% 00:43:50.840 cpu : usr=97.37%, sys=1.78%, ctx=87, majf=0, minf=1633 00:43:50.840 
IO depths : 1=2.6%, 2=8.8%, 4=25.0%, 8=53.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename1: (groupid=0, jobs=1): err= 0: pid=3230437: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=331, BW=1325KiB/s (1356kB/s)(12.9MiB/10002msec) 00:43:50.840 slat (nsec): min=6602, max=64223, avg=35020.27, stdev=8329.74 00:43:50.840 clat (msec): min=35, max=286, avg=48.00, stdev=23.17 00:43:50.840 lat (msec): min=35, max=286, avg=48.03, stdev=23.17 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 155], 99.50th=[ 184], 99.90th=[ 288], 99.95th=[ 288], 00:43:50.840 | 99.99th=[ 288] 00:43:50.840 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1313.68, stdev=356.57, samples=19 00:43:50.840 iops : min= 64, max= 384, avg=328.42, stdev=89.14, samples=19 00:43:50.840 lat (msec) : 50=96.62%, 100=0.48%, 250=2.42%, 500=0.48% 00:43:50.840 cpu : usr=97.85%, sys=1.63%, ctx=24, majf=0, minf=1633 00:43:50.840 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename2: (groupid=0, jobs=1): err= 0: pid=3230438: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=332, BW=1329KiB/s (1361kB/s)(13.0MiB/10018msec) 00:43:50.840 slat (usec): min=16, max=109, avg=48.78, stdev=16.14 00:43:50.840 clat (msec): min=17, max=284, avg=47.72, stdev=23.01 00:43:50.840 lat (msec): min=17, max=284, avg=47.76, stdev=23.01 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 284], 99.95th=[ 284], 00:43:50.840 | 99.99th=[ 284] 00:43:50.840 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1313.68, stdev=356.57, samples=19 00:43:50.840 iops : min= 64, max= 384, avg=328.42, stdev=89.14, samples=19 00:43:50.840 lat (msec) : 20=0.48%, 50=96.09%, 100=0.54%, 250=2.40%, 500=0.48% 00:43:50.840 cpu : usr=95.25%, sys=2.88%, ctx=304, majf=0, minf=1633 00:43:50.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename2: (groupid=0, jobs=1): err= 0: pid=3230439: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10012msec) 00:43:50.840 slat (nsec): min=8415, max=83323, avg=37411.28, stdev=8671.26 00:43:50.840 clat 
(msec): min=29, max=196, avg=47.80, stdev=19.66 00:43:50.840 lat (msec): min=29, max=196, avg=47.84, stdev=19.66 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 161], 99.50th=[ 171], 99.90th=[ 197], 99.95th=[ 197], 00:43:50.840 | 99.99th=[ 197] 00:43:50.840 bw ( KiB/s): min= 368, max= 1536, per=4.14%, avg=1320.42, stdev=336.14, samples=19 00:43:50.840 iops : min= 92, max= 384, avg=330.11, stdev=84.04, samples=19 00:43:50.840 lat (msec) : 50=96.15%, 100=0.54%, 250=3.31% 00:43:50.840 cpu : usr=97.34%, sys=1.78%, ctx=84, majf=0, minf=1635 00:43:50.840 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename2: (groupid=0, jobs=1): err= 0: pid=3230440: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=335, BW=1343KiB/s (1375kB/s)(13.1MiB/10006msec) 00:43:50.840 slat (usec): min=5, max=101, avg=32.52, stdev=10.04 00:43:50.840 clat (msec): min=7, max=216, avg=47.34, stdev=18.62 00:43:50.840 lat (msec): min=7, max=216, avg=47.38, stdev=18.62 00:43:50.840 clat percentiles (msec): 00:43:50.840 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.840 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.840 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.840 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 213], 99.95th=[ 218], 00:43:50.840 | 99.99th=[ 218] 00:43:50.840 bw ( KiB/s): min= 384, max= 1536, per=4.18%, avg=1333.89, stdev=287.22, samples=19 00:43:50.840 iops : min= 96, max= 384, avg=333.47, stdev=71.80, samples=19 00:43:50.840 lat (msec) : 10=0.48%, 50=95.71%, 100=0.65%, 250=3.15% 00:43:50.840 cpu : usr=97.52%, sys=1.65%, ctx=111, majf=0, minf=1633 00:43:50.840 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.840 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.840 filename2: (groupid=0, jobs=1): err= 0: pid=3230441: Sun Oct 13 20:13:39 2024 00:43:50.840 read: IOPS=332, BW=1329KiB/s (1360kB/s)(13.0MiB/10020msec) 00:43:50.841 slat (usec): min=6, max=147, avg=41.35, stdev=15.61 00:43:50.841 clat (msec): min=28, max=195, avg=47.81, stdev=18.47 00:43:50.841 lat (msec): min=28, max=195, avg=47.85, stdev=18.47 00:43:50.841 clat percentiles (msec): 00:43:50.841 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.841 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.841 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.841 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 197], 00:43:50.841 | 99.99th=[ 197] 00:43:50.841 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=319.44, samples=19 00:43:50.841 iops : min= 96, max= 384, avg=330.11, stdev=79.86, samples=19 00:43:50.841 lat (msec) : 
50=95.61%, 100=1.08%, 250=3.31% 00:43:50.841 cpu : usr=97.18%, sys=1.68%, ctx=120, majf=0, minf=1634 00:43:50.841 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.841 filename2: (groupid=0, jobs=1): err= 0: pid=3230442: Sun Oct 13 20:13:39 2024 00:43:50.841 read: IOPS=332, BW=1329KiB/s (1361kB/s)(13.0MiB/10016msec) 00:43:50.841 slat (nsec): min=13024, max=66540, avg=34065.76, stdev=7613.21 00:43:50.841 clat (msec): min=17, max=301, avg=47.84, stdev=24.10 00:43:50.841 lat (msec): min=17, max=301, avg=47.88, stdev=24.10 00:43:50.841 clat percentiles (msec): 00:43:50.841 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.841 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:43:50.841 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.841 | 99.00th=[ 161], 99.50th=[ 197], 99.90th=[ 300], 99.95th=[ 300], 00:43:50.841 | 99.99th=[ 300] 00:43:50.841 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1313.68, stdev=356.57, samples=19 00:43:50.841 iops : min= 64, max= 384, avg=328.42, stdev=89.14, samples=19 00:43:50.841 lat (msec) : 20=0.48%, 50=96.21%, 100=0.48%, 250=2.34%, 500=0.48% 00:43:50.841 cpu : usr=97.88%, sys=1.60%, ctx=20, majf=0, minf=1631 00:43:50.841 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.841 filename2: (groupid=0, jobs=1): err= 0: pid=3230443: Sun Oct 13 20:13:39 2024 00:43:50.841 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10012msec) 00:43:50.841 slat (usec): min=4, max=114, avg=63.94, stdev=10.32 00:43:50.841 clat (msec): min=28, max=171, avg=47.55, stdev=19.46 00:43:50.841 lat (msec): min=28, max=171, avg=47.62, stdev=19.46 00:43:50.841 clat percentiles (msec): 00:43:50.841 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.841 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:43:50.841 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.841 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 171], 00:43:50.841 | 99.99th=[ 171] 00:43:50.841 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=336.10, samples=19 00:43:50.841 iops : min= 96, max= 384, avg=330.11, stdev=84.03, samples=19 00:43:50.841 lat (msec) : 50=96.09%, 100=0.54%, 250=3.37% 00:43:50.841 cpu : usr=97.18%, sys=1.82%, ctx=65, majf=0, minf=1635 00:43:50.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:50.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.841 filename2: (groupid=0, jobs=1): err= 0: pid=3230444: Sun Oct 13 20:13:39 2024 00:43:50.841 read: IOPS=332, BW=1329KiB/s 
(1361kB/s)(13.0MiB/10018msec) 00:43:50.841 slat (usec): min=8, max=121, avg=67.94, stdev=13.25 00:43:50.841 clat (msec): min=42, max=154, avg=47.54, stdev=17.94 00:43:50.841 lat (msec): min=42, max=154, avg=47.61, stdev=17.93 00:43:50.841 clat percentiles (msec): 00:43:50.841 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.841 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:43:50.841 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.841 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:43:50.841 | 99.99th=[ 155] 00:43:50.841 bw ( KiB/s): min= 384, max= 1536, per=4.14%, avg=1320.42, stdev=319.44, samples=19 00:43:50.841 iops : min= 96, max= 384, avg=330.11, stdev=79.86, samples=19 00:43:50.841 lat (msec) : 50=95.67%, 100=0.96%, 250=3.37% 00:43:50.841 cpu : usr=95.68%, sys=2.44%, ctx=162, majf=0, minf=1633 00:43:50.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:50.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.841 filename2: (groupid=0, jobs=1): err= 0: pid=3230445: Sun Oct 13 20:13:39 2024 00:43:50.841 read: IOPS=339, BW=1356KiB/s (1389kB/s)(13.2MiB/10003msec) 00:43:50.841 slat (nsec): min=6127, max=96732, avg=25971.18, stdev=9992.59 00:43:50.841 clat (msec): min=6, max=196, avg=46.95, stdev=17.70 00:43:50.841 lat (msec): min=6, max=196, avg=46.98, stdev=17.70 00:43:50.841 clat percentiles (msec): 00:43:50.841 | 1.00th=[ 16], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:43:50.841 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:43:50.841 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:50.841 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 197], 99.95th=[ 197], 00:43:50.841 | 99.99th=[ 197] 00:43:50.841 bw ( KiB/s): min= 512, max= 1536, per=4.23%, avg=1347.37, stdev=250.13, samples=19 00:43:50.841 iops : min= 128, max= 384, avg=336.84, stdev=62.53, samples=19 00:43:50.841 lat (msec) : 10=0.94%, 20=0.94%, 50=93.81%, 100=1.59%, 250=2.71% 00:43:50.841 cpu : usr=97.76%, sys=1.65%, ctx=30, majf=0, minf=1636 00:43:50.841 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:50.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.841 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:50.841 00:43:50.841 Run status group 0 (all jobs): 00:43:50.841 READ: bw=31.1MiB/s (32.6MB/s), 1324KiB/s-1356KiB/s (1356kB/s-1389kB/s), io=312MiB (327MB), run=10002-10036msec 00:43:51.099 ----------------------------------------------------- 00:43:51.099 Suppressions used: 00:43:51.099 count bytes template 00:43:51.099 45 402 /usr/src/fio/parse.c 00:43:51.099 1 8 libtcmalloc_minimal.so 00:43:51.099 1 904 libcrypto.so 00:43:51.099 ----------------------------------------------------- 00:43:51.099 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
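For reference, the fio run summarized above can be reproduced outside the autotest harness. The trace shows fio launched with SPDK's bdev fio plugin preloaded alongside libasan, a bdev JSON config passed on /dev/fd/62, and a generated job file on /dev/fd/61. The sketch below is a minimal standalone equivalent under stated assumptions: it reuses the workspace paths from this run, writes the config and job file to ordinary temp files, assumes the outer "subsystems"/"bdev" wrapper around the bdev_nvme_attach_controller entry (the trace only prints the joined entries themselves), and assumes the bdev name Nvme0n1 from the attached controller name Nvme0.

# Minimal standalone sketch of the traced fio invocation (paths and wrapper assumed from this log).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

# Bdev config: one NVMe-oF TCP attach, matching the params printed by gen_nvmf_target_json above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file approximating the fio banner above (4k randread, iodepth 16); filename is the bdev name,
# assumed to be Nvme0n1 for controller Nvme0.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16

[filename0]
filename=Nvme0n1
EOF

# libasan stays first in LD_PRELOAD when the plugin was built with ASAN, exactly as in this run.
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio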
00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 
00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 bdev_null0 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.359 [2024-10-13 20:13:41.018729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:51.359 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.360 bdev_null1 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:51.360 { 00:43:51.360 "params": { 00:43:51.360 "name": "Nvme$subsystem", 00:43:51.360 "trtype": "$TEST_TRANSPORT", 00:43:51.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:51.360 "adrfam": "ipv4", 00:43:51.360 "trsvcid": "$NVMF_PORT", 00:43:51.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:51.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:51.360 "hdgst": ${hdgst:-false}, 00:43:51.360 "ddgst": ${ddgst:-false} 00:43:51.360 }, 00:43:51.360 "method": "bdev_nvme_attach_controller" 00:43:51.360 } 00:43:51.360 EOF 00:43:51.360 )") 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:51.360 
20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:51.360 { 00:43:51.360 "params": { 00:43:51.360 "name": "Nvme$subsystem", 00:43:51.360 "trtype": "$TEST_TRANSPORT", 00:43:51.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:51.360 "adrfam": "ipv4", 00:43:51.360 "trsvcid": "$NVMF_PORT", 00:43:51.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:51.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:51.360 "hdgst": ${hdgst:-false}, 00:43:51.360 "ddgst": ${ddgst:-false} 00:43:51.360 }, 00:43:51.360 "method": "bdev_nvme_attach_controller" 00:43:51.360 } 00:43:51.360 EOF 00:43:51.360 )") 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:51.360 "params": { 00:43:51.360 "name": "Nvme0", 00:43:51.360 "trtype": "tcp", 00:43:51.360 "traddr": "10.0.0.2", 00:43:51.360 "adrfam": "ipv4", 00:43:51.360 "trsvcid": "4420", 00:43:51.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:51.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:51.360 "hdgst": false, 00:43:51.360 "ddgst": false 00:43:51.360 }, 00:43:51.360 "method": "bdev_nvme_attach_controller" 00:43:51.360 },{ 00:43:51.360 "params": { 00:43:51.360 "name": "Nvme1", 00:43:51.360 "trtype": "tcp", 00:43:51.360 "traddr": "10.0.0.2", 00:43:51.360 "adrfam": "ipv4", 00:43:51.360 "trsvcid": "4420", 00:43:51.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:51.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:51.360 "hdgst": false, 00:43:51.360 "ddgst": false 00:43:51.360 }, 00:43:51.360 "method": "bdev_nvme_attach_controller" 00:43:51.360 }' 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:51.360 20:13:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.618 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:51.618 ... 00:43:51.618 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:51.618 ... 
00:43:51.618 fio-3.35 00:43:51.618 Starting 4 threads 00:43:58.175 00:43:58.175 filename0: (groupid=0, jobs=1): err= 0: pid=3231950: Sun Oct 13 20:13:47 2024 00:43:58.175 read: IOPS=1464, BW=11.4MiB/s (12.0MB/s)(57.2MiB/5001msec) 00:43:58.175 slat (usec): min=6, max=103, avg=27.98, stdev= 9.79 00:43:58.175 clat (usec): min=1143, max=14140, avg=5352.61, stdev=586.63 00:43:58.175 lat (usec): min=1170, max=14164, avg=5380.59, stdev=586.71 00:43:58.175 clat percentiles (usec): 00:43:58.175 | 1.00th=[ 3621], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5145], 00:43:58.175 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:43:58.175 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5735], 00:43:58.175 | 99.00th=[ 7439], 99.50th=[ 8717], 99.90th=[13829], 99.95th=[13829], 00:43:58.175 | 99.99th=[14091] 00:43:58.175 bw ( KiB/s): min=11376, max=12048, per=25.07%, avg=11697.78, stdev=232.34, samples=9 00:43:58.175 iops : min= 1422, max= 1506, avg=1462.22, stdev=29.04, samples=9 00:43:58.175 lat (msec) : 2=0.16%, 4=1.26%, 10=98.44%, 20=0.14% 00:43:58.175 cpu : usr=90.38%, sys=6.44%, ctx=110, majf=0, minf=1635 00:43:58.175 IO depths : 1=1.4%, 2=22.6%, 4=51.9%, 8=24.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 issued rwts: total=7324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:58.175 filename0: (groupid=0, jobs=1): err= 0: pid=3231951: Sun Oct 13 20:13:47 2024 00:43:58.175 read: IOPS=1446, BW=11.3MiB/s (11.9MB/s)(56.5MiB/5003msec) 00:43:58.175 slat (nsec): min=6932, max=81370, avg=27580.34, stdev=11830.23 00:43:58.175 clat (usec): min=1284, max=13348, avg=5425.10, stdev=711.71 00:43:58.175 lat (usec): min=1302, max=13369, avg=5452.68, stdev=710.55 00:43:58.175 clat percentiles (usec): 00:43:58.175 | 1.00th=[ 3523], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5145], 00:43:58.175 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:43:58.175 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 6063], 00:43:58.175 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[12911], 99.95th=[13042], 00:43:58.175 | 99.99th=[13304] 00:43:58.175 bw ( KiB/s): min=11024, max=12048, per=24.71%, avg=11533.22, stdev=351.32, samples=9 00:43:58.175 iops : min= 1378, max= 1506, avg=1441.56, stdev=43.92, samples=9 00:43:58.175 lat (msec) : 2=0.33%, 4=1.01%, 10=98.40%, 20=0.26% 00:43:58.175 cpu : usr=94.54%, sys=4.88%, ctx=7, majf=0, minf=1636 00:43:58.175 IO depths : 1=0.3%, 2=19.9%, 4=53.9%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 issued rwts: total=7238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:58.175 filename1: (groupid=0, jobs=1): err= 0: pid=3231952: Sun Oct 13 20:13:47 2024 00:43:58.175 read: IOPS=1473, BW=11.5MiB/s (12.1MB/s)(57.6MiB/5003msec) 00:43:58.175 slat (nsec): min=6598, max=72722, avg=18639.30, stdev=9410.68 00:43:58.175 clat (usec): min=1292, max=9706, avg=5373.72, stdev=443.28 00:43:58.175 lat (usec): min=1310, max=9727, avg=5392.36, stdev=443.65 00:43:58.175 clat percentiles (usec): 00:43:58.175 | 1.00th=[ 3851], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5211], 
00:43:58.175 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:43:58.175 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5866], 00:43:58.175 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 8455], 99.95th=[ 8979], 00:43:58.175 | 99.99th=[ 9765] 00:43:58.175 bw ( KiB/s): min=11360, max=12288, per=25.25%, avg=11781.33, stdev=298.05, samples=9 00:43:58.175 iops : min= 1420, max= 1536, avg=1472.67, stdev=37.26, samples=9 00:43:58.175 lat (msec) : 2=0.20%, 4=1.11%, 10=98.68% 00:43:58.175 cpu : usr=94.96%, sys=4.44%, ctx=8, majf=0, minf=1638 00:43:58.175 IO depths : 1=0.2%, 2=9.6%, 4=62.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 issued rwts: total=7371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:58.175 filename1: (groupid=0, jobs=1): err= 0: pid=3231954: Sun Oct 13 20:13:47 2024 00:43:58.175 read: IOPS=1449, BW=11.3MiB/s (11.9MB/s)(56.6MiB/5002msec) 00:43:58.175 slat (nsec): min=6998, max=80864, avg=27837.29, stdev=11910.56 00:43:58.175 clat (usec): min=955, max=12210, avg=5411.53, stdev=658.58 00:43:58.175 lat (usec): min=980, max=12232, avg=5439.37, stdev=657.69 00:43:58.175 clat percentiles (usec): 00:43:58.175 | 1.00th=[ 3163], 5.00th=[ 4948], 10.00th=[ 5080], 20.00th=[ 5145], 00:43:58.175 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:43:58.175 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 6063], 00:43:58.175 | 99.00th=[ 8356], 99.50th=[ 8979], 99.90th=[11863], 99.95th=[11863], 00:43:58.175 | 99.99th=[12256] 00:43:58.175 bw ( KiB/s): min=11056, max=11952, per=24.79%, avg=11569.78, stdev=330.76, samples=9 00:43:58.175 iops : min= 1382, max= 1494, avg=1446.22, stdev=41.35, samples=9 00:43:58.175 lat (usec) : 1000=0.01% 00:43:58.175 lat (msec) : 2=0.28%, 4=1.03%, 10=98.55%, 20=0.12% 00:43:58.175 cpu : usr=94.96%, sys=4.46%, ctx=6, majf=0, minf=1631 00:43:58.175 IO depths : 1=1.5%, 2=21.4%, 4=52.6%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.175 issued rwts: total=7251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:58.175 00:43:58.175 Run status group 0 (all jobs): 00:43:58.175 READ: bw=45.6MiB/s (47.8MB/s), 11.3MiB/s-11.5MiB/s (11.9MB/s-12.1MB/s), io=228MiB (239MB), run=5001-5003msec 00:43:59.109 ----------------------------------------------------- 00:43:59.109 Suppressions used: 00:43:59.109 count bytes template 00:43:59.109 6 52 /usr/src/fio/parse.c 00:43:59.109 1 8 libtcmalloc_minimal.so 00:43:59.109 1 904 libcrypto.so 00:43:59.109 ----------------------------------------------------- 00:43:59.109 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.109 00:43:59.109 real 0m27.884s 00:43:59.109 user 4m34.604s 00:43:59.109 sys 0m8.082s 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:59.109 20:13:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.109 ************************************ 00:43:59.109 END TEST fio_dif_rand_params 00:43:59.109 ************************************ 00:43:59.109 20:13:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:59.109 20:13:48 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:59.109 20:13:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:59.109 20:13:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:59.109 ************************************ 00:43:59.109 START TEST fio_dif_digest 00:43:59.109 ************************************ 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:59.109 
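The parameters being set here size the fio_dif_digest workload: 128 KiB blocks, three jobs at queue depth 3, a 10-second runtime, and NVMe/TCP header and data digests switched on (the runtime and hdgst/ddgst assignments continue just below). The target side is a single null bdev with 16 bytes of metadata and DIF type 3 protection exposed through one subsystem; the RPCs that build it appear in the trace that follows and amount to this sketch, where rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py:

    # Same arguments as the rpc_cmd calls traced below.
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420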
20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:59.109 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:59.110 bdev_null0 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:59.110 [2024-10-13 20:13:48.698642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:59.110 { 00:43:59.110 "params": { 00:43:59.110 "name": "Nvme$subsystem", 00:43:59.110 "trtype": "$TEST_TRANSPORT", 00:43:59.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:59.110 "adrfam": "ipv4", 00:43:59.110 "trsvcid": "$NVMF_PORT", 00:43:59.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:59.110 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:59.110 "hdgst": ${hdgst:-false}, 00:43:59.110 "ddgst": ${ddgst:-false} 00:43:59.110 }, 00:43:59.110 "method": "bdev_nvme_attach_controller" 00:43:59.110 } 00:43:59.110 EOF 00:43:59.110 )") 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:59.110 "params": { 00:43:59.110 "name": "Nvme0", 00:43:59.110 "trtype": "tcp", 00:43:59.110 "traddr": "10.0.0.2", 00:43:59.110 "adrfam": "ipv4", 00:43:59.110 "trsvcid": "4420", 00:43:59.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:59.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:59.110 "hdgst": true, 00:43:59.110 "ddgst": true 00:43:59.110 }, 00:43:59.110 "method": "bdev_nvme_attach_controller" 00:43:59.110 }' 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:59.110 20:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:59.368 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:59.368 ... 00:43:59.368 fio-3.35 00:43:59.368 Starting 3 threads 00:44:11.564 00:44:11.564 filename0: (groupid=0, jobs=1): err= 0: pid=3232937: Sun Oct 13 20:14:00 2024 00:44:11.564 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(208MiB/10007msec) 00:44:11.564 slat (nsec): min=6724, max=71118, avg=24446.61, stdev=5771.28 00:44:11.564 clat (usec): min=10700, max=24962, avg=18010.41, stdev=1239.12 00:44:11.564 lat (usec): min=10727, max=24981, avg=18034.86, stdev=1237.70 00:44:11.564 clat percentiles (usec): 00:44:11.564 | 1.00th=[15533], 5.00th=[16188], 10.00th=[16581], 20.00th=[17171], 00:44:11.564 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:44:11.564 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:44:11.564 | 99.00th=[21627], 99.50th=[22152], 99.90th=[25035], 99.95th=[25035], 00:44:11.564 | 99.99th=[25035] 00:44:11.564 bw ( KiB/s): min=19200, max=22016, per=32.93%, avg=21273.60, stdev=714.00, samples=20 00:44:11.564 iops : min= 150, max= 172, avg=166.20, stdev= 5.58, samples=20 00:44:11.564 lat (msec) : 20=93.69%, 50=6.31% 00:44:11.564 cpu : usr=88.65%, sys=7.46%, ctx=424, majf=0, minf=1634 00:44:11.564 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.564 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.564 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:11.564 filename0: (groupid=0, jobs=1): err= 0: pid=3232938: Sun Oct 13 20:14:00 2024 00:44:11.564 read: IOPS=173, BW=21.6MiB/s (22.7MB/s)(217MiB/10049msec) 00:44:11.564 slat (nsec): min=6778, max=58648, avg=21761.08, stdev=5140.06 00:44:11.564 clat (usec): min=12600, max=55223, avg=17285.04, stdev=1751.82 00:44:11.564 lat (usec): min=12640, max=55247, avg=17306.80, stdev=1751.52 00:44:11.564 clat percentiles (usec): 00:44:11.564 | 1.00th=[14615], 5.00th=[15139], 10.00th=[15664], 20.00th=[16188], 00:44:11.564 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 
60.00th=[17433], 00:44:11.564 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19268], 00:44:11.564 | 99.00th=[20579], 99.50th=[20841], 99.90th=[50594], 99.95th=[55313], 00:44:11.564 | 99.99th=[55313] 00:44:11.564 bw ( KiB/s): min=20992, max=23296, per=34.39%, avg=22220.80, stdev=682.89, samples=20 00:44:11.564 iops : min= 164, max= 182, avg=173.60, stdev= 5.34, samples=20 00:44:11.564 lat (msec) : 20=97.81%, 50=2.07%, 100=0.12% 00:44:11.564 cpu : usr=94.40%, sys=5.04%, ctx=22, majf=0, minf=1634 00:44:11.564 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.564 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.564 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:11.564 filename0: (groupid=0, jobs=1): err= 0: pid=3232939: Sun Oct 13 20:14:00 2024 00:44:11.564 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(209MiB/10049msec) 00:44:11.564 slat (nsec): min=7656, max=49667, avg=21230.92, stdev=4372.18 00:44:11.564 clat (usec): min=13389, max=51749, avg=18011.45, stdev=1623.69 00:44:11.564 lat (usec): min=13413, max=51796, avg=18032.68, stdev=1623.91 00:44:11.564 clat percentiles (usec): 00:44:11.564 | 1.00th=[15270], 5.00th=[16188], 10.00th=[16581], 20.00th=[17171], 00:44:11.564 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:44:11.564 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[20055], 00:44:11.564 | 99.00th=[21103], 99.50th=[21365], 99.90th=[48497], 99.95th=[51643], 00:44:11.564 | 99.99th=[51643] 00:44:11.564 bw ( KiB/s): min=20224, max=22272, per=33.01%, avg=21326.85, stdev=560.77, samples=20 00:44:11.564 iops : min= 158, max= 174, avg=166.60, stdev= 4.41, samples=20 00:44:11.564 lat (msec) : 20=95.39%, 50=4.55%, 100=0.06% 00:44:11.564 cpu : usr=94.21%, sys=5.22%, ctx=14, majf=0, minf=1636 00:44:11.564 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.564 issued rwts: total=1669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.564 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:11.564 00:44:11.564 Run status group 0 (all jobs): 00:44:11.564 READ: bw=63.1MiB/s (66.2MB/s), 20.8MiB/s-21.6MiB/s (21.8MB/s-22.7MB/s), io=634MiB (665MB), run=10007-10049msec 00:44:11.564 ----------------------------------------------------- 00:44:11.564 Suppressions used: 00:44:11.564 count bytes template 00:44:11.564 5 44 /usr/src/fio/parse.c 00:44:11.564 1 8 libtcmalloc_minimal.so 00:44:11.564 1 904 libcrypto.so 00:44:11.564 ----------------------------------------------------- 00:44:11.564 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 
-- # xtrace_disable 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.564 00:44:11.564 real 0m12.430s 00:44:11.564 user 0m30.112s 00:44:11.564 sys 0m2.218s 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:11.564 20:14:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:11.564 ************************************ 00:44:11.564 END TEST fio_dif_digest 00:44:11.564 ************************************ 00:44:11.564 20:14:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:11.564 20:14:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:11.564 rmmod nvme_tcp 00:44:11.564 rmmod nvme_fabrics 00:44:11.564 rmmod nvme_keyring 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3226166 ']' 00:44:11.564 20:14:01 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3226166 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3226166 ']' 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3226166 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3226166 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3226166' 00:44:11.564 killing process with pid 3226166 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3226166 00:44:11.564 20:14:01 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3226166 00:44:12.939 20:14:02 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:44:12.939 20:14:02 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:13.873 Waiting for block devices as requested 00:44:13.873 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:13.873 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:13.873 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:14.132 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:14.132 0000:00:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:44:14.132 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:14.132 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:14.390 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:14.390 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:14.390 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:14.390 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:14.648 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:14.648 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:14.648 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:14.648 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:14.906 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:14.906 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:14.906 20:14:04 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:14.906 20:14:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:14.906 20:14:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:17.450 20:14:06 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:17.450 00:44:17.450 real 1m15.558s 00:44:17.450 user 6m43.352s 00:44:17.450 sys 0m19.603s 00:44:17.450 20:14:06 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:17.450 20:14:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:17.450 ************************************ 00:44:17.450 END TEST nvmf_dif 00:44:17.450 ************************************ 00:44:17.450 20:14:06 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:17.450 20:14:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:17.450 20:14:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:17.450 20:14:06 -- common/autotest_common.sh@10 -- # set +x 00:44:17.450 ************************************ 00:44:17.450 START TEST nvmf_abort_qd_sizes 00:44:17.450 ************************************ 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:17.450 * Looking for test storage... 
00:44:17.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:17.450 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.450 --rc genhtml_branch_coverage=1 00:44:17.450 --rc genhtml_function_coverage=1 00:44:17.450 --rc genhtml_legend=1 00:44:17.450 --rc geninfo_all_blocks=1 00:44:17.450 --rc geninfo_unexecuted_blocks=1 00:44:17.450 00:44:17.450 ' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:44:17.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.451 --rc genhtml_branch_coverage=1 00:44:17.451 --rc genhtml_function_coverage=1 00:44:17.451 --rc genhtml_legend=1 00:44:17.451 --rc geninfo_all_blocks=1 00:44:17.451 --rc geninfo_unexecuted_blocks=1 00:44:17.451 00:44:17.451 ' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:17.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.451 --rc genhtml_branch_coverage=1 00:44:17.451 --rc genhtml_function_coverage=1 00:44:17.451 --rc genhtml_legend=1 00:44:17.451 --rc geninfo_all_blocks=1 00:44:17.451 --rc geninfo_unexecuted_blocks=1 00:44:17.451 00:44:17.451 ' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:17.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.451 --rc genhtml_branch_coverage=1 00:44:17.451 --rc genhtml_function_coverage=1 00:44:17.451 --rc genhtml_legend=1 00:44:17.451 --rc geninfo_all_blocks=1 00:44:17.451 --rc geninfo_unexecuted_blocks=1 00:44:17.451 00:44:17.451 ' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:17.451 20:14:06 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:17.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:17.451 20:14:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:19.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:19.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:19.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
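Both ports of the E810 NIC (device 0x159b, ice driver) pass the checks above, and the discovery then records which kernel net devices sit under each PCI function; on this rig they are the renamed cvl_0_0 and cvl_0_1 interfaces. The core of that mapping, lifted from the traced gather_supported_nvmf_pci_devs logic:

    # List the sysfs net children of each supported function and keep only the
    # interface names (e.g. cvl_0_0, cvl_0_1).
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        net_devs+=("${pci_net_devs[@]}")
    done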
00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:19.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:19.351 20:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:19.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:19.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:44:19.351 00:44:19.351 --- 10.0.0.2 ping statistics --- 00:44:19.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:19.351 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:19.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:19.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:44:19.351 00:44:19.351 --- 10.0.0.1 ping statistics --- 00:44:19.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:19.351 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:44:19.351 20:14:09 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:20.725 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:20.725 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:20.725 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:21.661 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3237989 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3237989 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3237989 ']' 00:44:21.661 20:14:11 
nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:21.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:21.661 20:14:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:21.919 [2024-10-13 20:14:11.550588] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:44:21.919 [2024-10-13 20:14:11.550744] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:21.919 [2024-10-13 20:14:11.686356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:22.177 [2024-10-13 20:14:11.827064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:22.177 [2024-10-13 20:14:11.827141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:22.177 [2024-10-13 20:14:11.827166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:22.177 [2024-10-13 20:14:11.827190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:22.177 [2024-10-13 20:14:11.827209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
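The target itself was launched a few lines earlier inside the dedicated namespace by nvmfappstart, listening for RPCs on the default /var/tmp/spdk.sock; the 0xf core mask is why four reactors report in on cores 0 through 3 directly below, and -e 0xFFFF enables every tracepoint group, as the startup notice explains. The launch command from the trace:

    # As issued by nvmfappstart (pid 3237989 in this run).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xf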
00:44:22.177 [2024-10-13 20:14:11.830080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:22.177 [2024-10-13 20:14:11.830150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:22.177 [2024-10-13 20:14:11.830237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:22.177 [2024-10-13 20:14:11.830243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:22.743 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:22.744 20:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:22.744 ************************************ 00:44:22.744 START TEST spdk_target_abort 00:44:22.744 ************************************ 00:44:22.744 20:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:44:22.744 20:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:22.744 20:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b 
spdk_target 00:44:22.744 20:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:22.744 20:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:26.084 spdk_targetn1 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:26.084 [2024-10-13 20:14:15.446400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:26.084 [2024-10-13 20:14:15.492865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:26.084 20:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:29.369 Initializing NVMe Controllers 00:44:29.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:29.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:29.369 Initialization complete. Launching workers. 00:44:29.370 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9294, failed: 0 00:44:29.370 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 8078 00:44:29.370 success 735, unsuccessful 481, failed 0 00:44:29.370 20:14:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:29.370 20:14:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:32.656 Initializing NVMe Controllers 00:44:32.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:32.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:32.656 Initialization complete. Launching workers. 
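For reference, the spdk_target_abort flow traced above boils down to a short JSON-RPC sequence followed by one run of the abort example per queue depth. A minimal standalone sketch, assuming the stock scripts/rpc.py client; every command and value is taken from the trace, but the loop wrapper itself is illustrative rather than the harness code:

  # Target side: export the local PCIe drive (0000:88:00.0) over NVMe/TCP
  rpc=./scripts/rpc.py
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: one abort run per queue depth, 50/50 read/write, 4 KiB I/O
  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

Each run's "abort submitted / failed to submit" counters are what the test is after: the larger the queue depth, the more commands are in flight and eligible to be aborted.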
00:44:32.656 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8394, failed: 0 00:44:32.656 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7131 00:44:32.656 success 312, unsuccessful 951, failed 0 00:44:32.656 20:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:32.656 20:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:35.947 Initializing NVMe Controllers 00:44:35.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:35.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:35.947 Initialization complete. Launching workers. 00:44:35.947 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26660, failed: 0 00:44:35.947 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2813, failed to submit 23847 00:44:35.947 success 151, unsuccessful 2662, failed 0 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.947 20:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3237989 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3237989 ']' 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3237989 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3237989 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3237989' 00:44:37.322 killing process with pid 3237989 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3237989 00:44:37.322 20:14:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@974 -- # wait 3237989 00:44:38.267 00:44:38.267 real 0m15.216s 00:44:38.267 user 0m59.184s 00:44:38.267 sys 0m2.940s 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:38.267 ************************************ 00:44:38.267 END TEST spdk_target_abort 00:44:38.267 ************************************ 00:44:38.267 20:14:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:38.267 20:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:38.267 20:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:38.267 20:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:38.267 ************************************ 00:44:38.267 START TEST kernel_target_abort 00:44:38.267 ************************************ 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:44:38.267 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:38.268 20:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:39.204 Waiting for block devices as requested 00:44:39.204 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:39.462 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:39.462 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:39.462 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:39.462 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:39.720 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:39.720 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:39.720 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:39.720 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:39.978 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:39.978 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:39.978 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:39.978 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:40.238 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:40.238 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:40.238 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:40.238 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:40.804 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:44:40.804 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:40.804 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:40.805 No valid GPT data, bailing 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:40.805 20:14:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:44:40.805 00:44:40.805 Discovery Log Number of Records 2, Generation counter 2 00:44:40.805 =====Discovery Log Entry 0====== 00:44:40.805 trtype: tcp 00:44:40.805 adrfam: ipv4 00:44:40.805 subtype: current discovery subsystem 00:44:40.805 treq: not specified, sq flow control disable supported 00:44:40.805 portid: 1 00:44:40.805 trsvcid: 4420 00:44:40.805 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:40.805 traddr: 10.0.0.1 00:44:40.805 eflags: none 00:44:40.805 sectype: none 00:44:40.805 =====Discovery Log Entry 1====== 00:44:40.805 trtype: tcp 00:44:40.805 adrfam: ipv4 00:44:40.805 subtype: nvme subsystem 00:44:40.805 treq: not specified, sq flow control disable supported 00:44:40.805 portid: 1 00:44:40.805 trsvcid: 4420 00:44:40.805 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:40.805 traddr: 10.0.0.1 00:44:40.805 eflags: none 00:44:40.805 sectype: none 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:40.805 20:14:30 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:40.805 20:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:44.085 Initializing NVMe Controllers 00:44:44.085 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:44.085 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:44.085 Initialization complete. Launching workers. 00:44:44.085 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40430, failed: 0 00:44:44.085 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40430, failed to submit 0 00:44:44.085 success 0, unsuccessful 40430, failed 0 00:44:44.085 20:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:44.085 20:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:47.362 Initializing NVMe Controllers 00:44:47.362 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:47.362 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:47.362 Initialization complete. Launching workers. 
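The configure_kernel_target sequence traced just before these runs (the mkdir/echo/ln -s commands under /sys/kernel/config/nvmet) is the standard way to stand up a Linux kernel NVMe/TCP target by hand. The xtrace does not record redirection targets, so the sketch below maps each traced echo onto the usual nvmet configfs attribute files; treat the attribute names as an assumption about what the helper writes, not as a transcript of this run:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet                        # the trace loads nvmet explicitly; nvmet_tcp ends up loaded too
  mkdir -p "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$sub/attr_model"           # assumed target of the traced 'echo SPDK-nqn...'
  echo 1            > "$sub/attr_allow_any_host"  # assumed target of the first traced 'echo 1'
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                # publish the subsystem on the port

The nvme discover output above, with its two records (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420), confirms the kernel port was live before the abort runs against 10.0.0.1 started.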
00:44:47.362 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66583, failed: 0 00:44:47.362 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16802, failed to submit 49781 00:44:47.362 success 0, unsuccessful 16802, failed 0 00:44:47.362 20:14:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:47.363 20:14:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:50.642 Initializing NVMe Controllers 00:44:50.642 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:50.642 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:50.642 Initialization complete. Launching workers. 00:44:50.642 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70944, failed: 0 00:44:50.642 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17726, failed to submit 53218 00:44:50.642 success 0, unsuccessful 17726, failed 0 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:44:50.642 20:14:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:51.575 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:51.575 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:51.575 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:51.575 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:51.575 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:51.575 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:51.575 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:51.833 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:51.833 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:44:51.833 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:52.768 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:52.768 00:44:52.768 real 0m14.656s 00:44:52.768 user 0m7.241s 00:44:52.768 sys 0m3.320s 00:44:52.768 20:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:52.768 20:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:52.768 ************************************ 00:44:52.768 END TEST kernel_target_abort 00:44:52.768 ************************************ 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:52.768 rmmod nvme_tcp 00:44:52.768 rmmod nvme_fabrics 00:44:52.768 rmmod nvme_keyring 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3237989 ']' 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3237989 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3237989 ']' 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3237989 00:44:52.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3237989) - No such process 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3237989 is not found' 00:44:52.768 Process with pid 3237989 is not found 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:44:52.768 20:14:42 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:54.143 Waiting for block devices as requested 00:44:54.143 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:54.143 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:54.143 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:54.401 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:54.401 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:54.401 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:54.401 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:54.659 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:54.659 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:54.659 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:54.659 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:54.917 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:54.917 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:54.917 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:55.175 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:55.175 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:55.175 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:44:55.434 20:14:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:55.434 20:14:44 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:55.434 20:14:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:55.434 20:14:44 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:44:55.434 20:14:44 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:55.434 20:14:44 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:44:55.434 20:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:55.434 20:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:55.434 20:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:55.434 20:14:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:55.434 20:14:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:57.339 20:14:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:57.339 00:44:57.339 real 0m40.246s 00:44:57.339 user 1m8.884s 00:44:57.339 sys 0m9.832s 00:44:57.339 20:14:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:57.339 20:14:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:57.339 ************************************ 00:44:57.339 END TEST nvmf_abort_qd_sizes 00:44:57.339 ************************************ 00:44:57.339 20:14:47 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:57.339 20:14:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:57.339 20:14:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:57.339 20:14:47 -- common/autotest_common.sh@10 -- # set +x 00:44:57.339 ************************************ 00:44:57.339 START TEST keyring_file 00:44:57.339 ************************************ 00:44:57.339 20:14:47 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:57.339 * Looking for test storage... 
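The clean_kernel_target teardown traced above (just before END TEST kernel_target_abort) unwinds that configfs setup in reverse. A sketch, reusing the variables from the earlier setup sketch, with the caveat that the target of the traced 'echo 0' (the namespace enable attribute) is inferred rather than visible in the xtrace:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$sub/namespaces/1/enable"   # quiesce the namespace (inferred target of 'echo 0')
  rm -f  "$port/subsystems/$nqn"        # unpublish the subsystem from the port
  rmdir  "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet

scripts/setup.sh then hands the ioat and NVMe devices back to vfio-pci for userspace use, which is what the nvme -> vfio-pci and ioatdma -> vfio-pci rebind lines above show.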
00:44:57.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:57.339 20:14:47 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:57.339 20:14:47 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:44:57.339 20:14:47 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:57.598 20:14:47 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:57.598 20:14:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:57.598 20:14:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:57.598 20:14:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:57.599 20:14:47 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:57.599 20:14:47 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:57.599 --rc genhtml_branch_coverage=1 00:44:57.599 --rc genhtml_function_coverage=1 00:44:57.599 --rc genhtml_legend=1 00:44:57.599 --rc geninfo_all_blocks=1 00:44:57.599 --rc geninfo_unexecuted_blocks=1 00:44:57.599 00:44:57.599 ' 00:44:57.599 20:14:47 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:57.599 --rc genhtml_branch_coverage=1 00:44:57.599 --rc genhtml_function_coverage=1 00:44:57.599 --rc genhtml_legend=1 00:44:57.599 --rc geninfo_all_blocks=1 
00:44:57.599 --rc geninfo_unexecuted_blocks=1 00:44:57.599 00:44:57.599 ' 00:44:57.599 20:14:47 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:57.599 --rc genhtml_branch_coverage=1 00:44:57.599 --rc genhtml_function_coverage=1 00:44:57.599 --rc genhtml_legend=1 00:44:57.599 --rc geninfo_all_blocks=1 00:44:57.599 --rc geninfo_unexecuted_blocks=1 00:44:57.599 00:44:57.599 ' 00:44:57.599 20:14:47 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:57.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:57.599 --rc genhtml_branch_coverage=1 00:44:57.599 --rc genhtml_function_coverage=1 00:44:57.599 --rc genhtml_legend=1 00:44:57.599 --rc geninfo_all_blocks=1 00:44:57.599 --rc geninfo_unexecuted_blocks=1 00:44:57.599 00:44:57.599 ' 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:57.599 20:14:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:57.599 20:14:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:57.599 20:14:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:57.599 20:14:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:57.599 20:14:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:57.599 20:14:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:57.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
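The prep_key call that starts here (its body is traced over the next stretch) turns a raw hex key into an NVMe TLS PSK file that the keyring_file module can load. A condensed sketch of what the trace shows; format_interchange_psk is the helper from the nvmf common script that wraps the hex key into the NVMeTLSkey-1 interchange format, and its internals are omitted here:

  prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                                     # e.g. /tmp/tmp.ysfuF51zwK for key0
    format_interchange_psk "$key" "$digest" > "$path"  # emits the NVMeTLSkey-1:... string
    chmod 0600 "$path"                                 # owner-only; a later step deliberately flips this to 0660
    echo "$path"
  }

  key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
  key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)

The resulting paths are what keyring_file_add_key registers as key0 and key1 with bdevperf further down.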
00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ysfuF51zwK 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@731 -- # python - 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ysfuF51zwK 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ysfuF51zwK 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ysfuF51zwK 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BLYp7g0Hci 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:44:57.599 20:14:47 keyring_file -- nvmf/common.sh@731 -- # python - 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BLYp7g0Hci 00:44:57.599 20:14:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BLYp7g0Hci 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BLYp7g0Hci 00:44:57.599 20:14:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=3244215 00:44:57.600 20:14:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:57.600 20:14:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3244215 00:44:57.600 20:14:47 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3244215 ']' 00:44:57.600 20:14:47 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:57.600 20:14:47 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:57.600 20:14:47 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:57.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:57.600 20:14:47 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:57.600 20:14:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:57.858 [2024-10-13 20:14:47.438277] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:44:57.858 [2024-10-13 20:14:47.438463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244215 ] 00:44:57.858 [2024-10-13 20:14:47.564961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:58.116 [2024-10-13 20:14:47.696534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:44:59.049 20:14:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:59.049 [2024-10-13 20:14:48.662239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:59.049 null0 00:44:59.049 [2024-10-13 20:14:48.694278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:59.049 [2024-10-13 20:14:48.694872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:59.049 20:14:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:59.049 [2024-10-13 20:14:48.718311] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:59.049 request: 00:44:59.049 { 00:44:59.049 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:59.049 "secure_channel": false, 00:44:59.049 "listen_address": { 00:44:59.049 "trtype": "tcp", 00:44:59.049 "traddr": "127.0.0.1", 00:44:59.049 "trsvcid": "4420" 00:44:59.049 }, 00:44:59.049 "method": "nvmf_subsystem_add_listener", 00:44:59.049 "req_id": 1 00:44:59.049 } 00:44:59.049 Got JSON-RPC error response 00:44:59.049 response: 00:44:59.049 { 00:44:59.049 
"code": -32602, 00:44:59.049 "message": "Invalid parameters" 00:44:59.049 } 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:59.049 20:14:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=3244356 00:44:59.049 20:14:48 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:59.049 20:14:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3244356 /var/tmp/bperf.sock 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3244356 ']' 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:59.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:59.049 20:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:59.049 [2024-10-13 20:14:48.805118] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:44:59.049 [2024-10-13 20:14:48.805263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244356 ] 00:44:59.308 [2024-10-13 20:14:48.939503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:59.308 [2024-10-13 20:14:49.074450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:00.241 20:14:49 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:00.241 20:14:49 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:00.241 20:14:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:00.241 20:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:00.241 20:14:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BLYp7g0Hci 00:45:00.241 20:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BLYp7g0Hci 00:45:00.499 20:14:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:00.499 20:14:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:00.499 20:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:00.499 20:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:00.499 20:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:45:01.065 20:14:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ysfuF51zwK == \/\t\m\p\/\t\m\p\.\y\s\f\u\F\5\1\z\w\K ]] 00:45:01.065 20:14:50 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:01.065 20:14:50 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:01.065 20:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:01.065 20:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:01.065 20:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:01.065 20:14:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BLYp7g0Hci == \/\t\m\p\/\t\m\p\.\B\L\Y\p\7\g\0\H\c\i ]] 00:45:01.322 20:14:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:01.322 20:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:01.322 20:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:01.322 20:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:01.322 20:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:01.322 20:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:01.580 20:14:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:01.580 20:14:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:01.580 20:14:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:01.580 20:14:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:01.580 20:14:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:01.580 20:14:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:01.580 20:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:01.838 20:14:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:01.838 20:14:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:01.838 20:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:02.096 [2024-10-13 20:14:51.692781] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:02.096 nvme0n1 00:45:02.096 20:14:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:02.096 20:14:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:02.096 20:14:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:02.096 20:14:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:02.096 20:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:02.096 20:14:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:02.353 20:14:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:02.353 20:14:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:02.353 20:14:52 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:45:02.353 20:14:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:02.353 20:14:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:02.353 20:14:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:02.354 20:14:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:02.611 20:14:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:02.611 20:14:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:02.869 Running I/O for 1 seconds... 00:45:03.809 6600.00 IOPS, 25.78 MiB/s 00:45:03.809 Latency(us) 00:45:03.809 [2024-10-13T18:14:53.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:03.809 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:03.809 nvme0n1 : 1.01 6652.38 25.99 0.00 0.00 19150.65 8932.31 30874.74 00:45:03.809 [2024-10-13T18:14:53.624Z] =================================================================================================================== 00:45:03.809 [2024-10-13T18:14:53.624Z] Total : 6652.38 25.99 0.00 0.00 19150.65 8932.31 30874.74 00:45:03.809 { 00:45:03.809 "results": [ 00:45:03.809 { 00:45:03.809 "job": "nvme0n1", 00:45:03.809 "core_mask": "0x2", 00:45:03.809 "workload": "randrw", 00:45:03.809 "percentage": 50, 00:45:03.809 "status": "finished", 00:45:03.809 "queue_depth": 128, 00:45:03.809 "io_size": 4096, 00:45:03.809 "runtime": 1.011368, 00:45:03.809 "iops": 6652.375791996583, 00:45:03.809 "mibps": 25.98584293748665, 00:45:03.809 "io_failed": 0, 00:45:03.809 "io_timeout": 0, 00:45:03.809 "avg_latency_us": 19150.64623948562, 00:45:03.809 "min_latency_us": 8932.314074074075, 00:45:03.809 "max_latency_us": 30874.737777777777 00:45:03.809 } 00:45:03.809 ], 00:45:03.809 "core_count": 1 00:45:03.809 } 00:45:03.809 20:14:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:03.809 20:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:04.134 20:14:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:04.134 20:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:04.134 20:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:04.134 20:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:04.134 20:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:04.134 20:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:04.420 20:14:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:04.420 20:14:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:04.420 20:14:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:04.420 20:14:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:04.420 20:14:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:04.420 20:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:04.420 20:14:54 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:04.678 20:14:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:04.678 20:14:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:04.678 20:14:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:04.678 20:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:04.936 [2024-10-13 20:14:54.600578] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:04.936 [2024-10-13 20:14:54.601195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:04.936 [2024-10-13 20:14:54.602167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:04.936 [2024-10-13 20:14:54.603160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:04.936 [2024-10-13 20:14:54.603196] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:04.936 [2024-10-13 20:14:54.603221] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:04.936 [2024-10-13 20:14:54.603246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:45:04.936 request: 00:45:04.936 { 00:45:04.936 "name": "nvme0", 00:45:04.936 "trtype": "tcp", 00:45:04.936 "traddr": "127.0.0.1", 00:45:04.936 "adrfam": "ipv4", 00:45:04.936 "trsvcid": "4420", 00:45:04.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:04.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:04.936 "prchk_reftag": false, 00:45:04.936 "prchk_guard": false, 00:45:04.936 "hdgst": false, 00:45:04.936 "ddgst": false, 00:45:04.936 "psk": "key1", 00:45:04.936 "allow_unrecognized_csi": false, 00:45:04.936 "method": "bdev_nvme_attach_controller", 00:45:04.936 "req_id": 1 00:45:04.936 } 00:45:04.936 Got JSON-RPC error response 00:45:04.936 response: 00:45:04.936 { 00:45:04.936 "code": -5, 00:45:04.936 "message": "Input/output error" 00:45:04.936 } 00:45:04.936 20:14:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:04.936 20:14:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:04.936 20:14:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:04.936 20:14:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:04.936 20:14:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:04.936 20:14:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:04.936 20:14:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:04.936 20:14:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:04.936 20:14:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:04.936 20:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:05.194 20:14:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:05.194 20:14:54 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:05.194 20:14:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:05.194 20:14:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:05.194 20:14:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:05.194 20:14:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:05.194 20:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:05.452 20:14:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:05.452 20:14:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:05.452 20:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:05.709 20:14:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:05.709 20:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:05.967 20:14:55 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:05.967 20:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:05.967 20:14:55 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:06.225 20:14:55 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:06.226 20:14:55 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ysfuF51zwK 00:45:06.226 20:14:55 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:06.226 20:14:55 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:06.226 20:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:06.483 [2024-10-13 20:14:56.258270] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ysfuF51zwK': 0100660 00:45:06.483 [2024-10-13 20:14:56.258325] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:06.483 request: 00:45:06.483 { 00:45:06.483 "name": "key0", 00:45:06.483 "path": "/tmp/tmp.ysfuF51zwK", 00:45:06.483 "method": "keyring_file_add_key", 00:45:06.483 "req_id": 1 00:45:06.483 } 00:45:06.483 Got JSON-RPC error response 00:45:06.483 response: 00:45:06.483 { 00:45:06.483 "code": -1, 00:45:06.484 "message": "Operation not permitted" 00:45:06.484 } 00:45:06.484 20:14:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:06.484 20:14:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:06.484 20:14:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:06.484 20:14:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:06.484 20:14:56 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ysfuF51zwK 00:45:06.484 20:14:56 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:06.484 20:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ysfuF51zwK 00:45:07.049 20:14:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ysfuF51zwK 00:45:07.049 20:14:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:07.049 20:14:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:07.049 20:14:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:07.049 20:14:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:07.049 20:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:07.049 20:14:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:07.049 20:14:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:07.049 20:14:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:07.049 20:14:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:07.049 20:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:07.307 [2024-10-13 20:14:57.084605] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ysfuF51zwK': No such file or directory 00:45:07.307 [2024-10-13 20:14:57.084664] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:07.307 [2024-10-13 20:14:57.084704] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:07.307 [2024-10-13 20:14:57.084739] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:07.307 [2024-10-13 20:14:57.084759] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:07.307 [2024-10-13 20:14:57.084777] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:07.307 request: 00:45:07.307 { 00:45:07.307 "name": "nvme0", 00:45:07.307 "trtype": "tcp", 00:45:07.307 "traddr": "127.0.0.1", 00:45:07.307 "adrfam": "ipv4", 00:45:07.307 "trsvcid": "4420", 00:45:07.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:07.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:07.307 "prchk_reftag": false, 00:45:07.307 "prchk_guard": false, 00:45:07.307 "hdgst": false, 00:45:07.307 "ddgst": false, 00:45:07.307 "psk": "key0", 00:45:07.307 "allow_unrecognized_csi": false, 00:45:07.307 "method": "bdev_nvme_attach_controller", 00:45:07.307 "req_id": 1 00:45:07.307 } 00:45:07.307 Got JSON-RPC error response 00:45:07.307 response: 00:45:07.307 { 00:45:07.307 "code": -19, 00:45:07.307 "message": "No such device" 00:45:07.307 } 00:45:07.307 20:14:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:07.307 20:14:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:07.307 20:14:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:07.307 20:14:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:07.307 20:14:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:07.307 20:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:07.873 20:14:57 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2NX2FpepBP 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:07.873 20:14:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:07.873 20:14:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:45:07.873 20:14:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:07.873 20:14:57 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:45:07.873 20:14:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:45:07.873 20:14:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2NX2FpepBP 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2NX2FpepBP 00:45:07.873 20:14:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2NX2FpepBP 00:45:07.873 20:14:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2NX2FpepBP 00:45:07.873 20:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2NX2FpepBP 00:45:08.131 20:14:57 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:08.131 20:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:08.388 nvme0n1 00:45:08.388 20:14:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:08.388 20:14:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:08.388 20:14:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:08.388 20:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:08.388 20:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:08.388 20:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:08.646 20:14:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:08.646 20:14:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:08.646 20:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:08.904 20:14:58 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:08.904 20:14:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:08.904 20:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:08.904 20:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:45:08.904 20:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:09.162 20:14:58 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:09.162 20:14:58 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:09.162 20:14:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:09.162 20:14:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:09.162 20:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:09.162 20:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:09.162 20:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:09.420 20:14:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:09.420 20:14:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:09.420 20:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:09.677 20:14:59 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:09.677 20:14:59 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:09.677 20:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:09.934 20:14:59 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:09.935 20:14:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2NX2FpepBP 00:45:09.935 20:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2NX2FpepBP 00:45:10.192 20:14:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BLYp7g0Hci 00:45:10.192 20:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BLYp7g0Hci 00:45:10.758 20:15:00 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:10.758 20:15:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:11.016 nvme0n1 00:45:11.016 20:15:00 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:11.016 20:15:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:11.274 20:15:00 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:11.274 "subsystems": [ 00:45:11.274 { 00:45:11.274 "subsystem": "keyring", 00:45:11.274 "config": [ 00:45:11.274 { 00:45:11.274 "method": "keyring_file_add_key", 00:45:11.274 "params": { 00:45:11.274 "name": "key0", 00:45:11.274 "path": "/tmp/tmp.2NX2FpepBP" 00:45:11.274 } 00:45:11.274 }, 00:45:11.274 { 00:45:11.274 "method": "keyring_file_add_key", 00:45:11.274 "params": { 00:45:11.274 "name": "key1", 00:45:11.274 "path": "/tmp/tmp.BLYp7g0Hci" 00:45:11.274 } 00:45:11.274 } 00:45:11.274 ] 
00:45:11.274 }, 00:45:11.274 { 00:45:11.274 "subsystem": "iobuf", 00:45:11.274 "config": [ 00:45:11.274 { 00:45:11.274 "method": "iobuf_set_options", 00:45:11.274 "params": { 00:45:11.274 "small_pool_count": 8192, 00:45:11.274 "large_pool_count": 1024, 00:45:11.274 "small_bufsize": 8192, 00:45:11.274 "large_bufsize": 135168 00:45:11.274 } 00:45:11.274 } 00:45:11.274 ] 00:45:11.274 }, 00:45:11.274 { 00:45:11.274 "subsystem": "sock", 00:45:11.274 "config": [ 00:45:11.274 { 00:45:11.274 "method": "sock_set_default_impl", 00:45:11.274 "params": { 00:45:11.274 "impl_name": "posix" 00:45:11.274 } 00:45:11.274 }, 00:45:11.274 { 00:45:11.274 "method": "sock_impl_set_options", 00:45:11.274 "params": { 00:45:11.274 "impl_name": "ssl", 00:45:11.274 "recv_buf_size": 4096, 00:45:11.274 "send_buf_size": 4096, 00:45:11.274 "enable_recv_pipe": true, 00:45:11.274 "enable_quickack": false, 00:45:11.274 "enable_placement_id": 0, 00:45:11.274 "enable_zerocopy_send_server": true, 00:45:11.274 "enable_zerocopy_send_client": false, 00:45:11.274 "zerocopy_threshold": 0, 00:45:11.274 "tls_version": 0, 00:45:11.274 "enable_ktls": false 00:45:11.274 } 00:45:11.274 }, 00:45:11.274 { 00:45:11.274 "method": "sock_impl_set_options", 00:45:11.274 "params": { 00:45:11.274 "impl_name": "posix", 00:45:11.274 "recv_buf_size": 2097152, 00:45:11.275 "send_buf_size": 2097152, 00:45:11.275 "enable_recv_pipe": true, 00:45:11.275 "enable_quickack": false, 00:45:11.275 "enable_placement_id": 0, 00:45:11.275 "enable_zerocopy_send_server": true, 00:45:11.275 "enable_zerocopy_send_client": false, 00:45:11.275 "zerocopy_threshold": 0, 00:45:11.275 "tls_version": 0, 00:45:11.275 "enable_ktls": false 00:45:11.275 } 00:45:11.275 } 00:45:11.275 ] 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "subsystem": "vmd", 00:45:11.275 "config": [] 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "subsystem": "accel", 00:45:11.275 "config": [ 00:45:11.275 { 00:45:11.275 "method": "accel_set_options", 00:45:11.275 "params": { 00:45:11.275 "small_cache_size": 128, 00:45:11.275 "large_cache_size": 16, 00:45:11.275 "task_count": 2048, 00:45:11.275 "sequence_count": 2048, 00:45:11.275 "buf_count": 2048 00:45:11.275 } 00:45:11.275 } 00:45:11.275 ] 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "subsystem": "bdev", 00:45:11.275 "config": [ 00:45:11.275 { 00:45:11.275 "method": "bdev_set_options", 00:45:11.275 "params": { 00:45:11.275 "bdev_io_pool_size": 65535, 00:45:11.275 "bdev_io_cache_size": 256, 00:45:11.275 "bdev_auto_examine": true, 00:45:11.275 "iobuf_small_cache_size": 128, 00:45:11.275 "iobuf_large_cache_size": 16 00:45:11.275 } 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "method": "bdev_raid_set_options", 00:45:11.275 "params": { 00:45:11.275 "process_window_size_kb": 1024, 00:45:11.275 "process_max_bandwidth_mb_sec": 0 00:45:11.275 } 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "method": "bdev_iscsi_set_options", 00:45:11.275 "params": { 00:45:11.275 "timeout_sec": 30 00:45:11.275 } 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "method": "bdev_nvme_set_options", 00:45:11.275 "params": { 00:45:11.275 "action_on_timeout": "none", 00:45:11.275 "timeout_us": 0, 00:45:11.275 "timeout_admin_us": 0, 00:45:11.275 "keep_alive_timeout_ms": 10000, 00:45:11.275 "arbitration_burst": 0, 00:45:11.275 "low_priority_weight": 0, 00:45:11.275 "medium_priority_weight": 0, 00:45:11.275 "high_priority_weight": 0, 00:45:11.275 "nvme_adminq_poll_period_us": 10000, 00:45:11.275 "nvme_ioq_poll_period_us": 0, 00:45:11.275 "io_queue_requests": 512, 00:45:11.275 "delay_cmd_submit": true, 
00:45:11.275 "transport_retry_count": 4, 00:45:11.275 "bdev_retry_count": 3, 00:45:11.275 "transport_ack_timeout": 0, 00:45:11.275 "ctrlr_loss_timeout_sec": 0, 00:45:11.275 "reconnect_delay_sec": 0, 00:45:11.275 "fast_io_fail_timeout_sec": 0, 00:45:11.275 "disable_auto_failback": false, 00:45:11.275 "generate_uuids": false, 00:45:11.275 "transport_tos": 0, 00:45:11.275 "nvme_error_stat": false, 00:45:11.275 "rdma_srq_size": 0, 00:45:11.275 "io_path_stat": false, 00:45:11.275 "allow_accel_sequence": false, 00:45:11.275 "rdma_max_cq_size": 0, 00:45:11.275 "rdma_cm_event_timeout_ms": 0, 00:45:11.275 "dhchap_digests": [ 00:45:11.275 "sha256", 00:45:11.275 "sha384", 00:45:11.275 "sha512" 00:45:11.275 ], 00:45:11.275 "dhchap_dhgroups": [ 00:45:11.275 "null", 00:45:11.275 "ffdhe2048", 00:45:11.275 "ffdhe3072", 00:45:11.275 "ffdhe4096", 00:45:11.275 "ffdhe6144", 00:45:11.275 "ffdhe8192" 00:45:11.275 ] 00:45:11.275 } 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "method": "bdev_nvme_attach_controller", 00:45:11.275 "params": { 00:45:11.275 "name": "nvme0", 00:45:11.275 "trtype": "TCP", 00:45:11.275 "adrfam": "IPv4", 00:45:11.275 "traddr": "127.0.0.1", 00:45:11.275 "trsvcid": "4420", 00:45:11.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:11.275 "prchk_reftag": false, 00:45:11.275 "prchk_guard": false, 00:45:11.275 "ctrlr_loss_timeout_sec": 0, 00:45:11.275 "reconnect_delay_sec": 0, 00:45:11.275 "fast_io_fail_timeout_sec": 0, 00:45:11.275 "psk": "key0", 00:45:11.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:11.275 "hdgst": false, 00:45:11.275 "ddgst": false, 00:45:11.275 "multipath": "multipath" 00:45:11.275 } 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "method": "bdev_nvme_set_hotplug", 00:45:11.275 "params": { 00:45:11.275 "period_us": 100000, 00:45:11.275 "enable": false 00:45:11.275 } 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "method": "bdev_wait_for_examine" 00:45:11.275 } 00:45:11.275 ] 00:45:11.275 }, 00:45:11.275 { 00:45:11.275 "subsystem": "nbd", 00:45:11.275 "config": [] 00:45:11.275 } 00:45:11.275 ] 00:45:11.275 }' 00:45:11.275 20:15:00 keyring_file -- keyring/file.sh@115 -- # killprocess 3244356 00:45:11.275 20:15:00 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3244356 ']' 00:45:11.275 20:15:00 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3244356 00:45:11.275 20:15:00 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:11.275 20:15:00 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:11.275 20:15:00 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244356 00:45:11.275 20:15:01 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:11.275 20:15:01 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:11.275 20:15:01 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244356' 00:45:11.275 killing process with pid 3244356 00:45:11.275 20:15:01 keyring_file -- common/autotest_common.sh@969 -- # kill 3244356 00:45:11.275 Received shutdown signal, test time was about 1.000000 seconds 00:45:11.275 00:45:11.275 Latency(us) 00:45:11.275 [2024-10-13T18:15:01.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:11.275 [2024-10-13T18:15:01.090Z] =================================================================================================================== 00:45:11.275 [2024-10-13T18:15:01.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:11.275 20:15:01 keyring_file -- 
common/autotest_common.sh@974 -- # wait 3244356 00:45:12.208 20:15:01 keyring_file -- keyring/file.sh@118 -- # bperfpid=3246176 00:45:12.208 20:15:01 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3246176 /var/tmp/bperf.sock 00:45:12.208 20:15:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3246176 ']' 00:45:12.208 20:15:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:12.208 20:15:01 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:12.208 20:15:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:12.208 20:15:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:12.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:12.208 20:15:01 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:12.208 "subsystems": [ 00:45:12.208 { 00:45:12.208 "subsystem": "keyring", 00:45:12.208 "config": [ 00:45:12.208 { 00:45:12.208 "method": "keyring_file_add_key", 00:45:12.208 "params": { 00:45:12.208 "name": "key0", 00:45:12.208 "path": "/tmp/tmp.2NX2FpepBP" 00:45:12.208 } 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "method": "keyring_file_add_key", 00:45:12.208 "params": { 00:45:12.208 "name": "key1", 00:45:12.208 "path": "/tmp/tmp.BLYp7g0Hci" 00:45:12.208 } 00:45:12.208 } 00:45:12.208 ] 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "subsystem": "iobuf", 00:45:12.208 "config": [ 00:45:12.208 { 00:45:12.208 "method": "iobuf_set_options", 00:45:12.208 "params": { 00:45:12.208 "small_pool_count": 8192, 00:45:12.208 "large_pool_count": 1024, 00:45:12.208 "small_bufsize": 8192, 00:45:12.208 "large_bufsize": 135168 00:45:12.208 } 00:45:12.208 } 00:45:12.208 ] 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "subsystem": "sock", 00:45:12.208 "config": [ 00:45:12.208 { 00:45:12.208 "method": "sock_set_default_impl", 00:45:12.208 "params": { 00:45:12.208 "impl_name": "posix" 00:45:12.208 } 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "method": "sock_impl_set_options", 00:45:12.208 "params": { 00:45:12.208 "impl_name": "ssl", 00:45:12.208 "recv_buf_size": 4096, 00:45:12.208 "send_buf_size": 4096, 00:45:12.208 "enable_recv_pipe": true, 00:45:12.208 "enable_quickack": false, 00:45:12.208 "enable_placement_id": 0, 00:45:12.208 "enable_zerocopy_send_server": true, 00:45:12.208 "enable_zerocopy_send_client": false, 00:45:12.208 "zerocopy_threshold": 0, 00:45:12.208 "tls_version": 0, 00:45:12.208 "enable_ktls": false 00:45:12.208 } 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "method": "sock_impl_set_options", 00:45:12.208 "params": { 00:45:12.208 "impl_name": "posix", 00:45:12.208 "recv_buf_size": 2097152, 00:45:12.208 "send_buf_size": 2097152, 00:45:12.208 "enable_recv_pipe": true, 00:45:12.208 "enable_quickack": false, 00:45:12.208 "enable_placement_id": 0, 00:45:12.208 "enable_zerocopy_send_server": true, 00:45:12.208 "enable_zerocopy_send_client": false, 00:45:12.208 "zerocopy_threshold": 0, 00:45:12.208 "tls_version": 0, 00:45:12.208 "enable_ktls": false 00:45:12.208 } 00:45:12.208 } 00:45:12.208 ] 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "subsystem": "vmd", 00:45:12.208 "config": [] 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "subsystem": "accel", 00:45:12.208 "config": [ 00:45:12.208 { 00:45:12.208 "method": "accel_set_options", 
00:45:12.208 "params": { 00:45:12.208 "small_cache_size": 128, 00:45:12.208 "large_cache_size": 16, 00:45:12.208 "task_count": 2048, 00:45:12.208 "sequence_count": 2048, 00:45:12.208 "buf_count": 2048 00:45:12.208 } 00:45:12.208 } 00:45:12.208 ] 00:45:12.208 }, 00:45:12.208 { 00:45:12.208 "subsystem": "bdev", 00:45:12.208 "config": [ 00:45:12.208 { 00:45:12.208 "method": "bdev_set_options", 00:45:12.208 "params": { 00:45:12.208 "bdev_io_pool_size": 65535, 00:45:12.209 "bdev_io_cache_size": 256, 00:45:12.209 "bdev_auto_examine": true, 00:45:12.209 "iobuf_small_cache_size": 128, 00:45:12.209 "iobuf_large_cache_size": 16 00:45:12.209 } 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 "method": "bdev_raid_set_options", 00:45:12.209 "params": { 00:45:12.209 "process_window_size_kb": 1024, 00:45:12.209 "process_max_bandwidth_mb_sec": 0 00:45:12.209 } 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 "method": "bdev_iscsi_set_options", 00:45:12.209 "params": { 00:45:12.209 "timeout_sec": 30 00:45:12.209 } 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 "method": "bdev_nvme_set_options", 00:45:12.209 "params": { 00:45:12.209 "action_on_timeout": "none", 00:45:12.209 "timeout_us": 0, 00:45:12.209 "timeout_admin_us": 0, 00:45:12.209 "keep_alive_timeout_ms": 10000, 00:45:12.209 "arbitration_burst": 0, 00:45:12.209 "low_priority_weight": 0, 00:45:12.209 "medium_priority_weight": 0, 00:45:12.209 "high_priority_weight": 0, 00:45:12.209 "nvme_adminq_poll_period_us": 10000, 00:45:12.209 "nvme_ioq_poll_period_us": 0, 00:45:12.209 "io_queue_requests": 512, 00:45:12.209 "delay_cmd_submit": true, 00:45:12.209 "transport_retry_count": 4, 00:45:12.209 "bdev_retry_count": 3, 00:45:12.209 "transport_ack_timeout": 0, 00:45:12.209 "ctrlr_loss_timeout_sec": 0, 00:45:12.209 "reconnect_delay_sec": 0, 00:45:12.209 "fast_io_fail_timeout_sec": 0, 00:45:12.209 "disable_auto_failback": false, 00:45:12.209 "generate_uuids": false, 00:45:12.209 "transport_tos": 0, 00:45:12.209 "nvme_error_stat": false, 00:45:12.209 "rdma_srq_size": 0, 00:45:12.209 "io_path_stat": false, 00:45:12.209 "allow_accel_sequence": false, 00:45:12.209 "rdma_max_cq_size": 0, 00:45:12.209 "rdma_cm_event_timeout_ms": 0, 00:45:12.209 "dhchap_digests": [ 00:45:12.209 "sha256", 00:45:12.209 "sha384", 00:45:12.209 "sha512" 00:45:12.209 ], 00:45:12.209 "dhchap_dhgroups": [ 00:45:12.209 "null", 00:45:12.209 "ffdhe2048", 00:45:12.209 "ffdhe3072", 00:45:12.209 "ffdhe4096", 00:45:12.209 "ffdhe6144", 00:45:12.209 "ffdhe8192" 00:45:12.209 ] 00:45:12.209 } 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 "method": "bdev_nvme_attach_controller", 00:45:12.209 "params": { 00:45:12.209 "name": "nvme0", 00:45:12.209 "trtype": "TCP", 00:45:12.209 "adrfam": "IPv4", 00:45:12.209 "traddr": "127.0.0.1", 00:45:12.209 "trsvcid": "4420", 00:45:12.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:12.209 "prchk_reftag": false, 00:45:12.209 "prchk_guard": false, 00:45:12.209 "ctrlr_loss_timeout_sec": 0, 00:45:12.209 "reconnect_delay_sec": 0, 00:45:12.209 "fast_io_fail_timeout_sec": 0, 00:45:12.209 "psk": "key0", 00:45:12.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:12.209 "hdgst": false, 00:45:12.209 "ddgst": false, 00:45:12.209 "multipath": "multipath" 00:45:12.209 } 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 "method": "bdev_nvme_set_hotplug", 00:45:12.209 "params": { 00:45:12.209 "period_us": 100000, 00:45:12.209 "enable": false 00:45:12.209 } 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 "method": "bdev_wait_for_examine" 00:45:12.209 } 00:45:12.209 ] 00:45:12.209 }, 00:45:12.209 { 00:45:12.209 
"subsystem": "nbd", 00:45:12.209 "config": [] 00:45:12.209 } 00:45:12.209 ] 00:45:12.209 }' 00:45:12.209 20:15:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:12.209 20:15:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:12.209 [2024-10-13 20:15:01.993667] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:45:12.209 [2024-10-13 20:15:01.993818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246176 ] 00:45:12.466 [2024-10-13 20:15:02.124802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:12.466 [2024-10-13 20:15:02.257420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:13.031 [2024-10-13 20:15:02.708351] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:13.290 20:15:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:13.290 20:15:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:13.290 20:15:02 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:13.290 20:15:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:13.290 20:15:02 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:13.547 20:15:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:13.547 20:15:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:13.547 20:15:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:13.547 20:15:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:13.547 20:15:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:13.547 20:15:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:13.547 20:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:13.805 20:15:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:13.805 20:15:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:13.805 20:15:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:13.805 20:15:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:13.805 20:15:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:13.805 20:15:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:13.805 20:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:14.063 20:15:03 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:14.063 20:15:03 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:14.063 20:15:03 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:14.063 20:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:14.321 20:15:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:14.321 20:15:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:14.321 20:15:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2NX2FpepBP 
/tmp/tmp.BLYp7g0Hci 00:45:14.321 20:15:04 keyring_file -- keyring/file.sh@20 -- # killprocess 3246176 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3246176 ']' 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3246176 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3246176 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3246176' 00:45:14.321 killing process with pid 3246176 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@969 -- # kill 3246176 00:45:14.321 Received shutdown signal, test time was about 1.000000 seconds 00:45:14.321 00:45:14.321 Latency(us) 00:45:14.321 [2024-10-13T18:15:04.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:14.321 [2024-10-13T18:15:04.136Z] =================================================================================================================== 00:45:14.321 [2024-10-13T18:15:04.136Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:14.321 20:15:04 keyring_file -- common/autotest_common.sh@974 -- # wait 3246176 00:45:15.255 20:15:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3244215 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3244215 ']' 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3244215 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244215 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244215' 00:45:15.255 killing process with pid 3244215 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@969 -- # kill 3244215 00:45:15.255 20:15:05 keyring_file -- common/autotest_common.sh@974 -- # wait 3244215 00:45:17.783 00:45:17.783 real 0m20.379s 00:45:17.783 user 0m46.218s 00:45:17.783 sys 0m3.712s 00:45:17.783 20:15:07 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:17.783 20:15:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.783 ************************************ 00:45:17.783 END TEST keyring_file 00:45:17.783 ************************************ 00:45:17.783 20:15:07 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:45:17.783 20:15:07 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:17.783 20:15:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:17.783 20:15:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:17.783 20:15:07 -- common/autotest_common.sh@10 -- # set +x 
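For reference, the keyring_file flow exercised above reduces to a short sequence of JSON-RPC calls against the bdevperf socket. A minimal sketch, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock and that the two PSK files from this run still exist with 0600 permissions (the negative test above shows a 0660 file being rejected with "Invalid permissions for key file"):

bperf_rpc() {
    # Same helper pattern as bperf_cmd in test/keyring/common.sh
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

# Register two file-based TLS PSKs with the keyring module.
bperf_rpc keyring_file_add_key key0 /tmp/tmp.2NX2FpepBP
bperf_rpc keyring_file_add_key key1 /tmp/tmp.BLYp7g0Hci

# Attach an NVMe-oF/TCP controller that authenticates with key0;
# keyring_get_keys then reports key0 with refcnt 2 while the controller holds it.
bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# Inspect the registered keys and their reference counts.
bperf_rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

# Tear down: detach the controller and drop both keys.
bperf_rpc bdev_nvme_detach_controller nvme0
bperf_rpc keyring_file_remove_key key0
bperf_rpc keyring_file_remove_key key1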
00:45:17.783 ************************************ 00:45:17.783 START TEST keyring_linux 00:45:17.783 ************************************ 00:45:17.783 20:15:07 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:17.783 Joined session keyring: 863861349 00:45:17.783 * Looking for test storage... 00:45:17.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:17.783 20:15:07 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:17.783 20:15:07 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:45:17.783 20:15:07 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:18.042 20:15:07 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:18.042 20:15:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:18.042 20:15:07 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:18.042 20:15:07 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:18.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.042 --rc genhtml_branch_coverage=1 00:45:18.042 --rc genhtml_function_coverage=1 00:45:18.042 --rc genhtml_legend=1 00:45:18.042 --rc geninfo_all_blocks=1 00:45:18.042 --rc geninfo_unexecuted_blocks=1 00:45:18.042 00:45:18.042 ' 00:45:18.042 20:15:07 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:18.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.042 --rc genhtml_branch_coverage=1 00:45:18.043 --rc genhtml_function_coverage=1 00:45:18.043 --rc genhtml_legend=1 00:45:18.043 --rc geninfo_all_blocks=1 00:45:18.043 --rc geninfo_unexecuted_blocks=1 00:45:18.043 00:45:18.043 ' 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:18.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.043 --rc genhtml_branch_coverage=1 00:45:18.043 --rc genhtml_function_coverage=1 00:45:18.043 --rc genhtml_legend=1 00:45:18.043 --rc geninfo_all_blocks=1 00:45:18.043 --rc geninfo_unexecuted_blocks=1 00:45:18.043 00:45:18.043 ' 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:18.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.043 --rc genhtml_branch_coverage=1 00:45:18.043 --rc genhtml_function_coverage=1 00:45:18.043 --rc genhtml_legend=1 00:45:18.043 --rc geninfo_all_blocks=1 00:45:18.043 --rc geninfo_unexecuted_blocks=1 00:45:18.043 00:45:18.043 ' 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:18.043 20:15:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:18.043 20:15:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:18.043 20:15:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:18.043 20:15:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:18.043 20:15:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.043 20:15:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.043 20:15:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.043 20:15:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:18.043 20:15:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:18.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@731 -- # python - 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:18.043 /tmp/:spdk-test:key0 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:18.043 
20:15:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:45:18.043 20:15:07 keyring_linux -- nvmf/common.sh@731 -- # python - 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:18.043 20:15:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:18.043 /tmp/:spdk-test:key1 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3247207 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:18.043 20:15:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3247207 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3247207 ']' 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:18.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:18.043 20:15:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:18.306 [2024-10-13 20:15:07.864577] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:45:18.306 [2024-10-13 20:15:07.864736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247207 ] 00:45:18.306 [2024-10-13 20:15:07.993995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:18.563 [2024-10-13 20:15:08.127614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.496 20:15:09 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:19.496 20:15:09 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:19.496 20:15:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:19.496 20:15:09 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.496 20:15:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:19.496 [2024-10-13 20:15:09.045205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:19.496 null0 00:45:19.496 [2024-10-13 20:15:09.077244] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:19.496 [2024-10-13 20:15:09.077917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:19.496 20:15:09 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.496 20:15:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:19.496 789548059 00:45:19.496 20:15:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:19.496 624802097 00:45:19.496 20:15:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3247604 00:45:19.496 20:15:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:19.497 20:15:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3247604 /var/tmp/bperf.sock 00:45:19.497 20:15:09 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3247604 ']' 00:45:19.497 20:15:09 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:19.497 20:15:09 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:19.497 20:15:09 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:19.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:19.497 20:15:09 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:19.497 20:15:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:19.497 [2024-10-13 20:15:09.185871] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
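The keyring_linux variant provisions the same interchange-format PSKs through the kernel session keyring instead of key files. A minimal sketch of the provisioning used by this test, assuming keyutils (keyctl) is available and the shell runs inside a dedicated session keyring such as the one created by scripts/keyctl-session-wrapper; rpc.py stands for spdk/scripts/rpc.py, and the key payloads are the test's throwaway examples, not real secrets:

# Add the PSKs as "user" keys in the session keyring (@s); keyctl prints the
# serial number of each new key (789548059 and 624802097 in this run).
keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s

# A key can later be located by name and its payload dumped for verification.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"

# On the SPDK side, enable the keyring_linux module before framework init,
# then reference the kernel key by name when attaching a TLS-enabled controller.
rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0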
00:45:19.497 [2024-10-13 20:15:09.186004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247604 ] 00:45:19.755 [2024-10-13 20:15:09.319471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.755 [2024-10-13 20:15:09.454976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:20.688 20:15:10 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:20.688 20:15:10 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:20.688 20:15:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:20.688 20:15:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:20.688 20:15:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:20.688 20:15:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:21.253 20:15:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:21.253 20:15:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:21.510 [2024-10-13 20:15:11.318279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:21.768 nvme0n1 00:45:21.768 20:15:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:45:21.768 20:15:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:21.768 20:15:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:21.768 20:15:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:21.768 20:15:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:21.768 20:15:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.025 20:15:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:22.025 20:15:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:22.025 20:15:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:22.025 20:15:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:22.025 20:15:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.025 20:15:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.025 20:15:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:22.283 20:15:11 keyring_linux -- keyring/linux.sh@25 -- # sn=789548059 00:45:22.283 20:15:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:22.283 20:15:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:22.283 20:15:11 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 789548059 == \7\8\9\5\4\8\0\5\9 ]] 00:45:22.283 20:15:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 789548059 00:45:22.283 20:15:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:22.283 20:15:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:22.283 Running I/O for 1 seconds... 00:45:23.656 6868.00 IOPS, 26.83 MiB/s 00:45:23.656 Latency(us) 00:45:23.656 [2024-10-13T18:15:13.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:23.656 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:23.656 nvme0n1 : 1.02 6886.51 26.90 0.00 0.00 18426.79 10922.67 30292.20 00:45:23.656 [2024-10-13T18:15:13.471Z] =================================================================================================================== 00:45:23.656 [2024-10-13T18:15:13.471Z] Total : 6886.51 26.90 0.00 0.00 18426.79 10922.67 30292.20 00:45:23.656 { 00:45:23.656 "results": [ 00:45:23.656 { 00:45:23.656 "job": "nvme0n1", 00:45:23.656 "core_mask": "0x2", 00:45:23.656 "workload": "randread", 00:45:23.656 "status": "finished", 00:45:23.656 "queue_depth": 128, 00:45:23.656 "io_size": 4096, 00:45:23.656 "runtime": 1.016045, 00:45:23.656 "iops": 6886.50601105266, 00:45:23.656 "mibps": 26.900414105674454, 00:45:23.656 "io_failed": 0, 00:45:23.656 "io_timeout": 0, 00:45:23.656 "avg_latency_us": 18426.78652671251, 00:45:23.656 "min_latency_us": 10922.666666666666, 00:45:23.656 "max_latency_us": 30292.195555555554 00:45:23.656 } 00:45:23.656 ], 00:45:23.656 "core_count": 1 00:45:23.656 } 00:45:23.656 20:15:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:23.656 20:15:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:23.656 20:15:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:23.656 20:15:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:23.656 20:15:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:23.656 20:15:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:23.656 20:15:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.656 20:15:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:23.914 20:15:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:23.914 20:15:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:23.914 20:15:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:23.914 20:15:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:23.914 20:15:13 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:45:23.914 20:15:13 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:45:23.914 20:15:13 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:23.914 20:15:13 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:23.914 20:15:13 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:23.915 20:15:13 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:23.915 20:15:13 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:23.915 20:15:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:24.173 [2024-10-13 20:15:13.940969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:24.173 [2024-10-13 20:15:13.940974] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:24.173 [2024-10-13 20:15:13.941942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:24.173 [2024-10-13 20:15:13.942923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:24.173 [2024-10-13 20:15:13.942960] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:24.173 [2024-10-13 20:15:13.942985] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:24.173 [2024-10-13 20:15:13.943010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
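The attach attempt above deliberately uses :spdk-test:key1, a PSK that does not match the one accepted in the earlier successful attach (:spdk-test:key0), so the transport errors here and the JSON-RPC Input/output error that follows are the expected outcome: the surrounding NOT wrapper treats a non-zero exit from the RPC as a pass. A rough sketch of that negative-test pattern, with the rpc.py path, socket, and arguments taken from the log and intended purely as illustration:

import subprocess

def attach_must_fail(rpc_py: str, sock: str, psk_name: str) -> None:
    # Mimic the NOT helper: the test only passes when this RPC call fails.
    cmd = [
        rpc_py, "-s", sock, "bdev_nvme_attach_controller",
        "-b", "nvme0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420",
        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode0",
        "-q", "nqn.2016-06.io.spdk:host0", "--psk", psk_name,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode != 0, "attach with a mismatched PSK unexpectedly succeeded"

attach_must_fail(
    "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py",
    "/var/tmp/bperf.sock",
    ":spdk-test:key1",
)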
00:45:24.173 request: 00:45:24.173 { 00:45:24.173 "name": "nvme0", 00:45:24.173 "trtype": "tcp", 00:45:24.173 "traddr": "127.0.0.1", 00:45:24.173 "adrfam": "ipv4", 00:45:24.173 "trsvcid": "4420", 00:45:24.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:24.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:24.173 "prchk_reftag": false, 00:45:24.173 "prchk_guard": false, 00:45:24.173 "hdgst": false, 00:45:24.173 "ddgst": false, 00:45:24.173 "psk": ":spdk-test:key1", 00:45:24.173 "allow_unrecognized_csi": false, 00:45:24.173 "method": "bdev_nvme_attach_controller", 00:45:24.173 "req_id": 1 00:45:24.173 } 00:45:24.173 Got JSON-RPC error response 00:45:24.173 response: 00:45:24.173 { 00:45:24.173 "code": -5, 00:45:24.173 "message": "Input/output error" 00:45:24.173 } 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@33 -- # sn=789548059 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 789548059 00:45:24.173 1 links removed 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@33 -- # sn=624802097 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 624802097 00:45:24.173 1 links removed 00:45:24.173 20:15:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3247604 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3247604 ']' 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3247604 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:24.173 20:15:13 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247604 00:45:24.431 20:15:14 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:24.431 20:15:14 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:24.431 20:15:14 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247604' 00:45:24.431 killing process with pid 3247604 00:45:24.431 20:15:14 keyring_linux -- common/autotest_common.sh@969 -- # kill 3247604 00:45:24.431 Received shutdown signal, test time was about 1.000000 seconds 00:45:24.431 00:45:24.431 
Latency(us) 00:45:24.431 [2024-10-13T18:15:14.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:24.431 [2024-10-13T18:15:14.246Z] =================================================================================================================== 00:45:24.431 [2024-10-13T18:15:14.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:24.431 20:15:14 keyring_linux -- common/autotest_common.sh@974 -- # wait 3247604 00:45:25.365 20:15:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3247207 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3247207 ']' 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3247207 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247207 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247207' 00:45:25.365 killing process with pid 3247207 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@969 -- # kill 3247207 00:45:25.365 20:15:14 keyring_linux -- common/autotest_common.sh@974 -- # wait 3247207 00:45:27.894 00:45:27.894 real 0m9.845s 00:45:27.894 user 0m17.032s 00:45:27.894 sys 0m1.955s 00:45:27.894 20:15:17 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:27.894 20:15:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:27.894 ************************************ 00:45:27.894 END TEST keyring_linux 00:45:27.894 ************************************ 00:45:27.894 20:15:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:27.894 20:15:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:45:27.894 20:15:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:27.894 20:15:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:27.894 20:15:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:45:27.894 20:15:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:45:27.894 20:15:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:45:27.894 20:15:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:27.894 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:45:27.894 20:15:17 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:45:27.894 20:15:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:45:27.894 20:15:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:45:27.894 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:45:29.792 INFO: APP EXITING 
00:45:29.792 INFO: killing all VMs 00:45:29.792 INFO: killing vhost app 00:45:29.792 INFO: EXIT DONE 00:45:30.725 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:30.725 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:30.725 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:30.725 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:30.725 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:30.725 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:30.725 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:30.725 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:30.725 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:30.725 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:30.725 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:30.725 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:30.725 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:30.725 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:30.725 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:30.725 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:30.725 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:32.098 Cleaning 00:45:32.098 Removing: /var/run/dpdk/spdk0/config 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:32.098 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:32.098 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:32.098 Removing: /var/run/dpdk/spdk1/config 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:32.098 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:32.098 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:32.098 Removing: /var/run/dpdk/spdk2/config 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:32.098 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:32.098 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:32.098 Removing: /var/run/dpdk/spdk3/config 00:45:32.098 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:32.098 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:32.098 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:32.098 Removing: /var/run/dpdk/spdk4/config 00:45:32.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:32.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:32.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:32.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:32.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:32.099 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:32.099 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:32.099 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:32.099 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:32.099 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:32.099 Removing: /dev/shm/bdev_svc_trace.1 00:45:32.099 Removing: /dev/shm/nvmf_trace.0 00:45:32.099 Removing: /dev/shm/spdk_tgt_trace.pid2837700 00:45:32.099 Removing: /var/run/dpdk/spdk0 00:45:32.099 Removing: /var/run/dpdk/spdk1 00:45:32.099 Removing: /var/run/dpdk/spdk2 00:45:32.099 Removing: /var/run/dpdk/spdk3 00:45:32.099 Removing: /var/run/dpdk/spdk4 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2834812 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2835949 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2837700 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2838418 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2839380 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2839792 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2840777 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2840916 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2841514 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2843026 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2844100 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2844785 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2845281 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2845885 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2846487 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2846643 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2846926 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2847116 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2847582 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2850339 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2850901 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2851399 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2851702 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2853450 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2853590 00:45:32.099 Removing: /var/run/dpdk/spdk_pid2854947 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2855085 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2855524 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2855672 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2856105 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2856253 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2857295 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2857570 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2857783 00:45:32.358 Removing: 
/var/run/dpdk/spdk_pid2860416 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2863202 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2870405 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2870856 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2873524 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2873802 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2876722 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2880707 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2883149 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2890886 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2896526 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2897967 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2898770 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2909948 00:45:32.358 Removing: /var/run/dpdk/spdk_pid2912589 00:45:32.359 Removing: /var/run/dpdk/spdk_pid2970110 00:45:32.359 Removing: /var/run/dpdk/spdk_pid2973612 00:45:32.359 Removing: /var/run/dpdk/spdk_pid2978341 00:45:32.359 Removing: /var/run/dpdk/spdk_pid2984158 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3013611 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3016795 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3017975 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3019432 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3019706 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3019988 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3020376 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3021218 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3022670 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3024005 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3024669 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3026699 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3027461 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3028794 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3031544 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3035257 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3035258 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3035259 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3037638 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3040092 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3043624 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3067795 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3070821 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3074980 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3076456 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3078075 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3079584 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3082902 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3085542 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3090798 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3090923 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3093976 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3094226 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3094365 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3094641 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3094765 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3095965 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3097146 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3098325 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3099504 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3100684 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3101876 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3105942 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3106393 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3107699 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3108598 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3112626 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3114676 00:45:32.359 Removing: 
/var/run/dpdk/spdk_pid3119165 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3122762 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3129523 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3134242 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3134252 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3147401 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3148065 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3148733 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3149504 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3150997 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3151661 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3152206 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3152846 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3155650 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3155926 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3159979 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3160174 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3163780 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3166536 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3173703 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3174107 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3176740 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3177018 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3179913 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3184467 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3186761 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3193801 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3199389 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3200699 00:45:32.359 Removing: /var/run/dpdk/spdk_pid3201486 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3212466 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3214988 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3217737 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3223177 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3223196 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3226336 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3227852 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3229367 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3230235 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3231764 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3232754 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3238426 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3238815 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3239207 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3240972 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3241374 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3241770 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3244215 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3244356 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3246176 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3247207 00:45:32.618 Removing: /var/run/dpdk/spdk_pid3247604 00:45:32.618 Clean 00:45:32.618 20:15:22 -- common/autotest_common.sh@1451 -- # return 0 00:45:32.618 20:15:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:45:32.618 20:15:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:32.618 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:45:32.618 20:15:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:45:32.618 20:15:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:32.618 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:45:32.618 20:15:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:32.618 20:15:22 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:32.618 20:15:22 -- 
spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:32.618 20:15:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:45:32.618 20:15:22 -- spdk/autotest.sh@394 -- # hostname 00:45:32.618 20:15:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:32.876 geninfo: WARNING: invalid characters removed from testname! 00:46:05.046 20:15:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:05.305 20:15:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:08.593 20:15:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:11.126 20:16:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:14.413 20:16:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:16.947 20:16:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:19.497 20:16:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:19.497 20:16:09 -- 
common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:46:19.497 20:16:09 -- common/autotest_common.sh@1691 -- $ lcov --version 00:46:19.497 20:16:09 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:46:19.497 20:16:09 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:46:19.497 20:16:09 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:46:19.497 20:16:09 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:46:19.497 20:16:09 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:46:19.497 20:16:09 -- scripts/common.sh@336 -- $ IFS=.-: 00:46:19.497 20:16:09 -- scripts/common.sh@336 -- $ read -ra ver1 00:46:19.497 20:16:09 -- scripts/common.sh@337 -- $ IFS=.-: 00:46:19.497 20:16:09 -- scripts/common.sh@337 -- $ read -ra ver2 00:46:19.497 20:16:09 -- scripts/common.sh@338 -- $ local 'op=<' 00:46:19.497 20:16:09 -- scripts/common.sh@340 -- $ ver1_l=2 00:46:19.497 20:16:09 -- scripts/common.sh@341 -- $ ver2_l=1 00:46:19.497 20:16:09 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:46:19.497 20:16:09 -- scripts/common.sh@344 -- $ case "$op" in 00:46:19.497 20:16:09 -- scripts/common.sh@345 -- $ : 1 00:46:19.497 20:16:09 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:46:19.497 20:16:09 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:19.497 20:16:09 -- scripts/common.sh@365 -- $ decimal 1 00:46:19.497 20:16:09 -- scripts/common.sh@353 -- $ local d=1 00:46:19.497 20:16:09 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:46:19.497 20:16:09 -- scripts/common.sh@355 -- $ echo 1 00:46:19.497 20:16:09 -- scripts/common.sh@365 -- $ ver1[v]=1 00:46:19.497 20:16:09 -- scripts/common.sh@366 -- $ decimal 2 00:46:19.497 20:16:09 -- scripts/common.sh@353 -- $ local d=2 00:46:19.497 20:16:09 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:46:19.497 20:16:09 -- scripts/common.sh@355 -- $ echo 2 00:46:19.497 20:16:09 -- scripts/common.sh@366 -- $ ver2[v]=2 00:46:19.497 20:16:09 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:46:19.497 20:16:09 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:46:19.498 20:16:09 -- scripts/common.sh@368 -- $ return 0 00:46:19.498 20:16:09 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:19.498 20:16:09 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:46:19.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:19.498 --rc genhtml_branch_coverage=1 00:46:19.498 --rc genhtml_function_coverage=1 00:46:19.498 --rc genhtml_legend=1 00:46:19.498 --rc geninfo_all_blocks=1 00:46:19.498 --rc geninfo_unexecuted_blocks=1 00:46:19.498 00:46:19.498 ' 00:46:19.498 20:16:09 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:46:19.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:19.498 --rc genhtml_branch_coverage=1 00:46:19.498 --rc genhtml_function_coverage=1 00:46:19.498 --rc genhtml_legend=1 00:46:19.498 --rc geninfo_all_blocks=1 00:46:19.498 --rc geninfo_unexecuted_blocks=1 00:46:19.498 00:46:19.498 ' 00:46:19.498 20:16:09 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:46:19.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:19.498 --rc genhtml_branch_coverage=1 00:46:19.498 --rc genhtml_function_coverage=1 00:46:19.498 --rc genhtml_legend=1 00:46:19.498 --rc geninfo_all_blocks=1 00:46:19.498 --rc geninfo_unexecuted_blocks=1 00:46:19.498 00:46:19.498 ' 00:46:19.498 20:16:09 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:46:19.498 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:46:19.498 --rc genhtml_branch_coverage=1 00:46:19.498 --rc genhtml_function_coverage=1 00:46:19.498 --rc genhtml_legend=1 00:46:19.498 --rc geninfo_all_blocks=1 00:46:19.498 --rc geninfo_unexecuted_blocks=1 00:46:19.498 00:46:19.498 ' 00:46:19.498 20:16:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:19.498 20:16:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:46:19.498 20:16:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:46:19.498 20:16:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:19.498 20:16:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:19.498 20:16:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.498 20:16:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.498 20:16:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.498 20:16:09 -- paths/export.sh@5 -- $ export PATH 00:46:19.498 20:16:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.498 20:16:09 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:46:19.498 20:16:09 -- common/autobuild_common.sh@486 -- $ date +%s 00:46:19.498 20:16:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728843369.XXXXXX 00:46:19.498 20:16:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728843369.JDwcfw 00:46:19.498 20:16:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:46:19.498 20:16:09 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:46:19.498 20:16:09 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:46:19.498 20:16:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:46:19.498 20:16:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:46:19.498 20:16:09 -- common/autobuild_common.sh@502 -- $ get_config_params 00:46:19.498 20:16:09 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:46:19.498 20:16:09 -- common/autotest_common.sh@10 -- $ set +x 00:46:19.498 20:16:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:46:19.498 20:16:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:46:19.498 20:16:09 -- pm/common@17 -- $ local monitor 00:46:19.498 20:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:19.498 20:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:19.498 20:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:19.498 20:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:19.498 20:16:09 -- pm/common@21 -- $ date +%s 00:46:19.498 20:16:09 -- pm/common@25 -- $ sleep 1 00:46:19.498 20:16:09 -- pm/common@21 -- $ date +%s 00:46:19.498 20:16:09 -- pm/common@21 -- $ date +%s 00:46:19.498 20:16:09 -- pm/common@21 -- $ date +%s 00:46:19.498 20:16:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728843369 00:46:19.498 20:16:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728843369 00:46:19.498 20:16:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728843369 00:46:19.498 20:16:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728843369 00:46:19.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728843369_collect-vmstat.pm.log 00:46:19.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728843369_collect-cpu-load.pm.log 00:46:19.757 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728843369_collect-cpu-temp.pm.log 00:46:19.757 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728843369_collect-bmc-pm.bmc.pm.log 00:46:20.693 20:16:10 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:46:20.693 20:16:10 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:46:20.693 20:16:10 -- spdk/autopackage.sh@14 -- $ timing_finish 00:46:20.693 20:16:10 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:20.693 20:16:10 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:20.693 20:16:10 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:20.693 20:16:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:46:20.693 20:16:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:46:20.693 20:16:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:46:20.693 20:16:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.693 20:16:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:46:20.693 20:16:10 -- pm/common@44 -- $ pid=3261054 00:46:20.693 20:16:10 -- pm/common@50 -- $ kill -TERM 3261054 00:46:20.693 20:16:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.693 20:16:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:46:20.693 20:16:10 -- pm/common@44 -- $ pid=3261056 00:46:20.693 20:16:10 -- pm/common@50 -- $ kill -TERM 3261056 00:46:20.693 20:16:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.693 20:16:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:46:20.693 20:16:10 -- pm/common@44 -- $ pid=3261058 00:46:20.693 20:16:10 -- pm/common@50 -- $ kill -TERM 3261058 00:46:20.693 20:16:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:20.693 20:16:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:46:20.693 20:16:10 -- pm/common@44 -- $ pid=3261086 00:46:20.693 20:16:10 -- pm/common@50 -- $ sudo -E kill -TERM 3261086 00:46:20.693 + [[ -n 2763860 ]] 00:46:20.693 + sudo kill 2763860 00:46:20.703 [Pipeline] } 00:46:20.718 [Pipeline] // stage 00:46:20.724 [Pipeline] } 00:46:20.738 [Pipeline] // timeout 00:46:20.742 [Pipeline] } 00:46:20.756 [Pipeline] // catchError 00:46:20.760 [Pipeline] } 00:46:20.775 [Pipeline] // wrap 00:46:20.781 [Pipeline] } 00:46:20.793 [Pipeline] // catchError 00:46:20.802 [Pipeline] stage 00:46:20.804 [Pipeline] { (Epilogue) 00:46:20.816 [Pipeline] catchError 00:46:20.818 [Pipeline] { 00:46:20.829 [Pipeline] echo 00:46:20.831 Cleanup processes 00:46:20.837 [Pipeline] sh 00:46:21.124 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:21.125 3261238 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:46:21.125 3261365 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:21.139 [Pipeline] sh 00:46:21.427 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:21.427 ++ grep -v 'sudo pgrep' 00:46:21.427 ++ awk '{print $1}' 00:46:21.427 + sudo kill -9 3261238 00:46:21.440 [Pipeline] sh 00:46:21.725 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:33.948 [Pipeline] sh 00:46:34.237 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:34.237 Artifacts sizes are good 00:46:34.253 [Pipeline] archiveArtifacts 00:46:34.260 Archiving artifacts 00:46:34.449 [Pipeline] sh 00:46:34.756 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:34.772 [Pipeline] cleanWs 00:46:34.783 [WS-CLEANUP] Deleting project workspace... 00:46:34.783 [WS-CLEANUP] Deferred wipeout is used... 
00:46:34.790 [WS-CLEANUP] done 00:46:34.792 [Pipeline] } 00:46:34.810 [Pipeline] // catchError 00:46:34.823 [Pipeline] sh 00:46:35.108 + logger -p user.info -t JENKINS-CI 00:46:35.116 [Pipeline] } 00:46:35.130 [Pipeline] // stage 00:46:35.136 [Pipeline] } 00:46:35.151 [Pipeline] // node 00:46:35.156 [Pipeline] End of Pipeline 00:46:35.205 Finished: SUCCESS